Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Guides you through the installation process and the basic configuration of your system. The Quick Start section shows a quick walk through the installation using default values. The second part of this chapter provides details for every installation step.
Introduces YaST, the central tool for installation and configuration of your system. Learn how to initially set up your system and how to modify key components of your system.
Understand how to install or remove software with either YaST or using the command line, how to use the 1-Click Install feature, and how to keep your system up-to-date.
Learn how to work with the bash shell, the default command line interpreter on openSUSE Leap. Get to know the most commonly used Linux commands and understand basic concepts of a Linux system.
Provides an overview of where to find help and additional documentation in case you need more information or want to perform specific tasks with your system. Also find a compilation of the most frequent problems and annoyances and learn how to solve these problems on your own.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation
is usually available in your installed system under
/usr/share/doc/manual.
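To check which manuals are present on a given system, a quick look at that directory suffices; note that on minimal installations the directory may be absent, so the sketch below prints a note in that case:

```shell
# List installed product manuals; print a note if the directory is missing.
ls /usr/share/doc/manual 2>/dev/null || echo "no manuals installed"
```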
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/, log in, and create a new bug report.
For feedback on the documentation of this product, you can also send an
e-mail to doc-team@suse.com. Make sure to include the
document title, the product version and the publication date of the
documentation. To report errors or suggest enhancements, provide a concise
description of the problem and refer to the respective section number and
page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and
parameters
user: users or groups
package name: name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
File, File › Save As: menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also
prefix these commands with the sudo command to run them
as a non-privileged user.
root # command
tux > sudo command
Commands that can be run by non-privileged users.
tux > command
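As a minimal sketch of this convention, the following snippet prints the prompt style that applies to the current shell, based on whether it runs with root privileges (UID 0):

```shell
# Print the documented prompt style for the current user.
# UID 0 means root; any other UID is a non-privileged user such as tux.
if [ "$(id -u)" -eq 0 ]; then
  echo 'root #'
else
  echo 'tux >'
fi
```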
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
This documentation is written in SUSEDoc, a subset of
DocBook 5.
The XML source files were validated by jing (see
https://code.google.com/p/jing-trang/), processed by
xsltproc, and converted into XSL-FO using a customized
version of Norman Walsh's stylesheets. The final PDF is formatted through FOP
from
Apache
Software Foundation. The open source tools and the environment used to
build this documentation are provided by the DocBook Authoring and Publishing
Suite (DAPS). The project's home page can be found at
https://github.com/openSUSE/daps.
The XML source code of this documentation can be found at https://github.com/SUSE/doc-sle.
The source code of openSUSE Leap is publicly available. Refer to http://en.opensuse.org/Source_code for download links and more information.
With a lot of voluntary commitment, the developers of Linux cooperate on a global scale to promote the development of Linux. We thank them for their efforts—this distribution would not exist without them. Special thanks, of course, goes to Linus Torvalds.
Use the following procedures to install a new version of openSUSE® Leap 42.3. This document gives a quick overview on how to run through a default installation of openSUSE Leap on the x86_64 architecture.
openSUSE Leap allows setting several parameters during boot, for example choosing the source of the installation data or setting the network configuration.
This chapter describes the procedure in which the data for openSUSE Leap is copied to the target device. Some basic configuration parameters for the newly installed system are set during the procedure. A graphical user interface will guide you through the installation.
This section highlights some typical problems you may run into during installation and offers possible solutions or workarounds.
Use the following procedures to install a new version of openSUSE® Leap 42.3. This document gives a quick overview on how to run through a default installation of openSUSE Leap on the x86_64 architecture.
For more detailed installation instructions see Chapter 3, Installation with YaST.
any AMD64/Intel* EM64T processor (32-bit processors are not supported)
1 GB physical RAM (2 GB or more recommended)
3 GB available disk space for a minimal install, 5 GB available for a graphical desktop (more recommended)
Supports most modern sound and graphics cards, 800 x 600 display resolution (1024 x 768 or higher recommended)
Use these instructions if there is no existing Linux system on your machine, or if you want to replace an existing Linux system.
Insert the openSUSE Leap DVD into the drive, then reboot the computer to start the installation program. On machines with a traditional BIOS you will see the graphical boot screen shown below. On machines equipped with UEFI, a slightly different boot screen is used. Secure boot on UEFI machines is supported.
Use F2 to change the language for the installer. A corresponding keyboard layout is chosen automatically. See Section 2.2.1, “The Boot Screen on Machines Equipped with Traditional BIOS” or Section 2.2.2, “The Boot Screen on Machines Equipped with UEFI” for more information about changing boot options.
Select Installation on the boot screen, then press Enter. This boots the system and loads the openSUSE Leap installer.
The language and keyboard layout are initialized with the settings you have chosen on the boot screen. Change them here, if necessary.
Read the License Agreement. It is presented in the language you have chosen on the boot screen. Translations of the license are available. Proceed to the next step.
In case no network interface could be configured automatically via DHCP, the network setup dialog opens. If you prefer to install openSUSE Leap without a network connection, skip this step to proceed. However, configuring the network at this stage is recommended, since it allows you to install the latest updates and security fixes from the online update repository. A working network connection also gives you access to additional software repositories. This step is skipped if a network interface was successfully configured via DHCP.
To configure the network, choose a network interface from the list and edit its settings. Use the tabs to configure DNS and routing. See Section 13.4, “Configuring a Network Connection with YaST” for more details.
A system analysis is performed, where the installer probes for storage devices, and tries to find other installed systems. When the analysis has finished, the partitioning dialog opens. Review the partition setup proposed by the system and, if necessary, change it. You have the following options:
Lets you change options for the proposed settings, but not the suggested partition layout itself.
Select a disk to which to apply the proposal.
Opens the partitioner dialog described in Section 5.1, “Using the YaST Partitioner”.
To accept the proposed setup without any changes, proceed to the next step.
Select the clock and time zone to use in your system. To manually adjust the time or to configure an NTP server for time synchronization, choose the respective option. See Section 3.6, “Clock and Time Zone” for detailed information. Proceed to the next step.
Select the desktop system you would like to use. KDE and GNOME are among the most widely used desktops on Linux.
If setting up a server, you probably do not need a graphical user interface. Choose a text-mode server installation in this case.
More desktop systems, such as XFCE, LXDE, MATE, and Enlightenment, are available after having enabled the online repositories. Doing so is also recommended if you want to get the latest security updates and fixes during the installation. A working Internet connection is required. You have the following choices:
The main open source (OSS) repository contains open source software. Compared to the DVD installation media, it contains many additional software packages, among them the above-mentioned desktop systems. Choose this repository to install them.
The update repository contains security updates and fixes for packages from the main repository and the DVD installation media. Choosing this repository is recommended for all installation scenarios.
The non-OSS repository contains packages with a proprietary software license. Choosing it is not required for installing a custom desktop system.
Choosing the non-OSS update repository is recommended when you have also chosen the non-OSS repository. It contains the respective updates and security fixes.
All other repositories are intended for experienced users and developers. Click on a repository name to get more information.
Confirm your selection. Depending on your choice, you need to confirm one or more license agreements before you return to the previous screen. Then proceed to the software selection screen, where you can choose a custom desktop system from the left-hand pane.
To create a local user, type the first and last name, the login name, and the password in the respective fields.
The password should be at least eight characters long and should contain both uppercase and lowercase letters and numbers. The maximum length for passwords is 72 characters, and passwords are case-sensitive.
For security reasons it is also strongly recommended
not to enable automatic login. You should also not use the same password
for the system administrator, but rather provide a separate root password
in the next installation step.
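The password guidelines above can be checked with a short shell sketch; the password shown is only an illustrative value, not a recommendation:

```shell
# Check a candidate password against the stated guidelines:
# 8-72 characters, containing uppercase letters, lowercase letters and digits.
pw='Examp1ePass'   # illustrative value only
len=${#pw}
if [ "$len" -ge 8 ] && [ "$len" -le 72 ] \
   && printf '%s' "$pw" | grep -q '[A-Z]' \
   && printf '%s' "$pw" | grep -q '[a-z]' \
   && printf '%s' "$pw" | grep -q '[0-9]'; then
  echo "password meets the guidelines"
else
  echo "password does not meet the guidelines"
fi
```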
If you install on a system where a previous Linux installation was found, you may import user data from that installation. Click the respective button for a list of available user accounts and select one or more users.
In an environment where users are centrally managed (for example by NIS or LDAP), you may want to skip the creation of local users.
Proceed with .
Type a password for the system administrator account (called the
root user). This step is skipped if you have chosen to use the local
user's password for the system administrator in the previous step.
You should never forget the root password! After you have entered
it here, the password cannot be retrieved. See
Section 3.9, “Password for the System Administrator root” for more information.
Proceed to the next step.
It is recommended to only use characters that are available on an English keyboard. In case of a system error, or when you need to start your system in rescue mode, a localized keyboard might not be available.
Use this screen to review and—if necessary—change several proposed installation settings. The current configuration is listed for each setting. To change it, click the headline. Some settings, such as firewall or SSH, can be changed directly by clicking the respective links.
Changes you can make here can also be made later at any time from the installed system. However, if you need remote access directly after the installation, you should adjust the settings by opening the SSH port and enabling the SSH server.
This section shows the boot loader configuration. Changing the defaults is only recommended if really needed. Refer to Chapter 12, The Boot Loader GRUB 2 for details.
The default scope of software includes the base system and X Window with the selected desktop. Clicking the headline opens the software selection screen, where you can change the software selection by selecting or deselecting patterns. Each pattern contains several software packages needed for specific functions (for example, Web and LAMP server or a print server). For a more detailed selection based on individual software packages, switch to the YaST Software Manager. See Chapter 11, Installing or Removing Software for more information.
If you have chosen to install a desktop system, the system boots into the graphical target, with network, multi-user and display manager support. If you have not installed a desktop, the system boots into a login shell.
View detailed hardware information by clicking the respective entry. In the resulting screen you can also change settings; see Section 3.10.5 for more information.
By default, the Firewall is enabled with all network interfaces
configured for the public zone. See
Section 15.4, “firewalld” for configuration
details.
The SSH service is disabled by default, and its port (22) is closed. Therefore, logging in remotely is not possible by default. Click the respective link to toggle these settings.
After you have finalized the system configuration on this screen, confirm to proceed. Depending on your software selection, you may need to agree to license agreements before the installation confirmation screen pops up. Up to this point, no changes have been made to your system. After you confirm a second time, the installation process starts.
During the installation, the progress is shown in detail on the tab.
After the installation routine has finished, the computer is rebooted into the installed system. Log in and start YaST to fine-tune the system. If you are not using a graphical desktop or are working from remote, refer to Chapter 1, YaST in Text Mode for information on using YaST from a terminal.
Using the appropriate set of boot parameters helps simplify your installation
procedure. Many parameters can also be configured later using the linuxrc
routines, but using the boot parameters is easier. In some automated setups,
the boot parameters can be provided with initrd or an
info file.
The way the system is started for the installation depends on the architecture—system start-up is different for PC (AMD64/Intel 64) or mainframe, for example. If you install openSUSE Leap as a VM Guest on a KVM or Xen hypervisor, follow the instructions for the AMD64/Intel 64 architecture.
The boot parameters are described in detail in Chapter 3, Installation with YaST. Generally, selecting Installation starts the installation boot process.
If problems occur, use or . For more information about troubleshooting the installation process, refer to Chapter 4, Troubleshooting.
The menu bar at the bottom of the screen offers some advanced functionality needed in some setups. Using the function keys (F1 ... F12), you can specify additional options to pass to the installation routines without having to know the detailed syntax of these parameters (see Chapter 2, Boot Parameters). A detailed description of the available function keys is available in Section 2.2.1, “The Boot Screen on Machines Equipped with Traditional BIOS”.
The boot screen displays several options for the installation procedure. The option to boot from the hard disk boots the installed system and is selected by default, because the CD is often left in the drive. Select one of the other options with the arrow keys and press Enter to boot it. The relevant options are:
The normal installation mode. All modern hardware functions are enabled. In case the installation fails, see F5 for boot parameters that disable potentially problematic functions.
Perform a system upgrade. For more information refer to Chapter 14, Upgrading the System and System Changes.
Starts a minimal Linux system without a graphical user interface. For more information, see Section 18.5.2, “Using the Rescue System”.
This option is only available when you install from media created from downloaded ISOs. In this case it is recommended to check the integrity of the installation medium. This option automatically checks the media before starting the installation system. If the check is successful, the normal installation routine starts. If a corrupted medium is detected, the installation routine aborts. Replace the broken medium and restart the installation process.
Tests your system RAM using repeated read and write cycles. Terminate the test by rebooting. For more information, see Section 4.4, “Fails to Boot”.
Use the function keys shown at the bottom of the screen to change the language, screen resolution, installation source or to add an additional driver from your hardware vendor:
Get context-sensitive help for the active element of the boot screen. Use the arrow keys to navigate, Enter to follow a link, and Esc to leave the help screen.
Select the display language and a corresponding keyboard layout for the installation. The default language is English (US).
Select various graphical display modes for the installation. By default,
the video resolution is automatically determined using KMS
(“Kernel Mode Setting”). If this setting does not work on your system,
choose No KMS and, optionally, specify vga=ask on the boot command line
to get prompted for the video resolution. Choose Text Mode if the
graphical installation causes problems.
Normally, the installation is performed from the inserted installation medium. Here, select other sources, like FTP or NFS servers. If the installation is deployed on a network with an SLP server, select an installation source available on the server with this option.
If you encounter problems with the regular installation, this menu offers options to disable a few potentially problematic functions. If your hardware does not support ACPI (Advanced Configuration and Power Interface), select the respective option to install without ACPI support. Another option disables support for APIC (Advanced Programmable Interrupt Controllers), which may cause problems with some hardware. The safe settings option boots the system with the DMA mode (for CD/DVD-ROM drives) and power management functions disabled.
If you are not sure, try one of these options first. Experts can also use the command line to enter or change kernel parameters.
Press this key to notify the system that you have an optional driver update for openSUSE Leap. You can load drivers directly before the installation starts, or you can be prompted to insert the update disk at the appropriate point in the installation process.
UEFI (Unified Extensible Firmware Interface) is a new industry standard which replaces and extends the traditional BIOS. The latest UEFI implementations contain the “Secure Boot” extension, which prevents booting malicious code by only allowing signed boot loaders to be executed. See Chapter 14, UEFI (Unified Extensible Firmware Interface) for more information.
The boot manager GRUB 2, used to boot machines with a traditional BIOS,
does not support UEFI, therefore GRUB 2 is replaced with GRUB 2 for EFI. If
Secure Boot is enabled, YaST will automatically select GRUB 2 for EFI for
installation. From an administrative and user perspective, both
boot manager implementations behave the same and are called
GRUB 2 in the following.
When installing with Secure Boot enabled, you cannot load drivers that are not shipped with openSUSE Leap. This is also true of drivers shipped via SolidDriver/PLDP, because their signing key is not trusted by default.
To load drivers not shipped with openSUSE Leap, do either of the following:
Before the installation, add the needed keys to the firmware database via firmware/system management tools.
Use a bootable ISO that will enroll the needed keys in the MOK list on the first boot.
For more information, see Section 14.1, “Secure Boot”.
The boot screen displays several options for the installation procedure. Change the selected option with the arrow keys and press Enter to boot it. The relevant options are:
The normal installation mode.
Perform a system upgrade.
Starts a minimal Linux system without a graphical user interface. For more information, see Section 18.5.2, “Using the Rescue System”.
This option is only available when you install from media created from downloaded ISOs. In this case it is recommended to check the integrity of the installation medium. This option automatically checks the media before starting the installation system. If the check is successful, the normal installation routine starts. If a corrupted medium is detected, the installation routine aborts.
GRUB 2 for EFI on openSUSE Leap does not support a boot prompt or function keys for adding boot parameters. By default, the installation will be started with American English and the boot media as the installation source. A DHCP lookup will be performed to configure the network. To change these defaults or to add additional boot parameters you need to edit the respective boot entry. Highlight it using the arrow keys and press E. See the on-screen help for editing hints (note that only an English keyboard is available now). The entry will look similar to the following:
setparams 'Installation'
    set gfxpayload=keep
    echo 'Loading kernel ...'
    linuxefi /boot/x86_64/loader/linux splash=silent
    echo 'Loading initial ramdisk ...'
    initrdefi /boot/x86_64/loader/initrd
Add space-separated parameters to the end of the line starting with
linuxefi. To boot the edited entry, press
F10. If you access the machine via serial console, press
Esc–0. A
complete list of parameters is available at
http://en.opensuse.org/Linuxrc.
autoyast=URL
The autoyast parameter specifies the location of the
autoinst.xml control file for automatic
installation.
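For example, the control file could be fetched from a Web server during boot; the host name and path below are placeholders:

```
autoyast=http://192.168.1.100/profiles/autoinst.xml
```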
manual=<0|1>
The manual parameter controls if the other
parameters are only default values that still must be acknowledged by
the user. Set this parameter to 0 if all values
should be accepted and no questions asked. Setting
autoyast implies setting manual to
0.
Info=URL
Specifies a location for a file from which to read additional options.
upgrade=<0|1>
To upgrade your installed system, specify upgrade=1.
dud=URL
Load driver updates from URL.
Set dud=ftp://ftp.example.com/PATH_TO_DRIVER
or dud=http://www.example.com/PATH_TO_DRIVER
to load drivers from a URL. When dud=1 you will
be asked for the URL during boot.
language=LANGUAGE
Set the installation language. Some supported values are
cs_CZ, de_DE,
es_ES, fr_FR,
ja_JP, pt_BR,
pt_PT, ru_RU,
zh_CN, and zh_TW.
acpi=off
Disable ACPI support.
noapic
No logical APIC.
nomodeset
Disable KMS.
textmode=1
Start the installer in text mode.
console=SERIAL_DEVICE[,MODE]
SERIAL_DEVICE can be an actual serial or parallel
device (for example ttyS0) or a virtual terminal
(for example tty1). MODE
is the baud rate, parity and stop bit (for example 9600n8).
The default for this setting is set by the motherboard firmware. If you do
not see output on your monitor, try setting console=tty1.
It is possible to define multiple devices.
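For example, to send output both to the first serial port at 115200 baud (no parity, 8 data bits) and to the first virtual terminal, the parameters could be combined as follows; the values are illustrative:

```
console=ttyS0,115200n8 console=tty1
```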
The settings discussed in this section apply only to the network interface used during installation. Configure additional network interfaces in the installed system by following the instructions given in Section 13.6, “Configuring a Network Connection Manually”.
The network will only be configured if it is required during the installation.
To force the network to be configured, use the netsetup
parameter.
netsetup=VALUE
netsetup=dhcp forces a configuration via DHCP.
Set netsetup=-dhcp when configuring the network
with the boot parameters hostip,
gateway and nameserver.
With the option
netsetup=hostip,netmask,gateway,nameserver the
installer asks for the network settings during boot.
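Putting these parameters together, a complete static network configuration at the boot prompt might look like the following line; all addresses and names are examples:

```
netsetup=-dhcp hostip=192.168.1.2/24 gateway=192.168.1.3 nameserver=192.168.1.4 hostname=host.example.com
```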
ifcfg=INTERFACE[.VLAN]=SETTINGS
INTERFACE can be * to
match all interfaces or, for example, eth* to match
all interfaces that start with eth. It is also
possible to use MAC addresses as values.
Optionally, a VLAN ID can be set after the interface name, separated by a period.
If SETTINGS is dhcp, all
matching interfaces will be configured with DHCP. It is also possible to
set static parameters. With static parameters, only the first matching
interface will be configured. The syntax for the static configuration is:
ifcfg=*="IPS_NETMASK,GATEWAYS,NAMESERVERS,DOMAINS"
Each comma-separated value can in turn contain a list of
space-separated values. IPS_NETMASK is in
CIDR notation, for example
10.0.0.1/24. The quotes are only needed when using
space-separated lists. Example with two name servers:
ifcfg=*="10.0.0.10/24,10.0.0.1,10.0.0.1 10.0.0.2,example.com"
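As a further illustrative example, all interfaces whose names start with eth could be configured via DHCP like this:

```
ifcfg=eth*=dhcp
```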
hostname=host.example.com
Enter the fully qualified host name.
domain=example.com
Domain search path for DNS. Allows you to use short host names instead of fully qualified ones.
hostip=192.168.1.2[/24]
Enter the IP address of the interface to configure. The IP can contain
the subnet mask, for example hostip=192.168.1.2/24.
This setting is only evaluated if the network is required during the
installation.
gateway=192.168.1.3
Specify the gateway to use. This setting is only evaluated if the network is required during the installation.
nameserver=192.168.1.4
Specify the DNS server in charge. This setting is only evaluated if the network is required during the installation.
domain=example.com
Domain search path. This setting is only evaluated if the network is required during the installation.
install=SOURCE
Specify the location of the installation source to use. Possible
protocols are cd, hd,
slp, nfs, smb
(Samba/CIFS), ftp, tftp,
http, and https. Not all source
types are available on all platforms. For example z Systems does not
support cd and hd.
The default option is cd.
If an ftp, tftp or
smb URL is given, specify the user name and password
with the URL. These parameters are optional and anonymous or guest login
is assumed if they are not given. Example:
install=ftp://USER:PASSWORD@SERVER/DIRECTORY/DVD1/
If you want to install over an encrypted connection, use an
https URL. If the certificate cannot be verified, use
the sslcerts=0 boot option to disable certificate
checking.
In case of a Samba or CIFS installation, you can also specify the domain that should be used:
install=smb://WORKDOMAIN;USER:PASSWORD@SERVER/DIRECTORY/DVD1/
To use cd, hd or slp,
set them like in the following example:
install=cd:/
install=hd:/?device=sda/PATH_TO_ISO
install=slp:/
Only one of the different remote control methods should be specified at a time. The different methods are: SSH, VNC, remote X server.
display_ip=IP_ADDRESS
display_ip causes the installing system to
try to connect to an X server at the given address.
The direct installation with the X Window System relies on a primitive authentication mechanism based on host names. This mechanism is disabled on current openSUSE Leap versions. Installation with SSH or VNC is preferred.
vnc=1
Enables a VNC server during the installation.
vncpassword=PASSWORD
Sets the password for the VNC server.
ssh=1
ssh enables SSH installation.
ssh.password=PASSWORD
Specifies an SSH password for the root user during installation.
To configure access to a local SMT or
supportconfig server for the installation, you can
specify boot parameters to
set up these services during installation. The same applies if you need IPv6 support
during the installation.
By default you can only assign IPv4 network addresses to your machine. To enable IPv6 during installation, enter one of the following parameters at the boot prompt:
ipv6=1
ipv6only=1
In networks enforcing the usage of a proxy server for accessing remote Web sites, registration during installation is only possible when configuring a proxy server.
To use a proxy during the installation, press F4 on the
boot screen and set the required parameters in the dialog. Alternatively provide the kernel parameter
proxy at the boot prompt:
proxy=http://USER:PASSWORD@proxy.example.com:PORT
Specifying USER and
PASSWORD is optional—if the server allows
anonymous access, the following data is sufficient:
http://proxy.example.com:PORT.
Enabling SELinux upon installation start-up enables you to configure it after the installation has been finished without having to reboot. Use the following parameters:
security=selinux selinux=1
During installation and upgrade, YaST can update itself as described
in Section 3.1, “Installer Self-Update” to solve potential bugs
discovered after release. The self_update parameter can
be used to modify the behavior of this feature.
To enable the installer self-update, set the parameter to
1:
self_update=1
To use a user-defined repository, specify a URL:
self_update=https://updates.example.com/
You can find more information about boot parameters in the openSUSE wiki at https://en.opensuse.org/SDB:Linuxrc#Parameter_Reference.
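To illustrate how several of the parameters described in this chapter can be combined, the following boot line sketches a network installation with automatic network setup, installer self-update, and a German installation language; the URLs are placeholders:

```
install=http://download.example.com/leap/ netsetup=dhcp self_update=1 language=de_DE
```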
This chapter describes the procedure in which the data for openSUSE Leap is copied to the target device. Some basic configuration parameters for the newly installed system are set during the procedure. A graphical user interface will guide you through the installation.
If you are a first-time user of openSUSE Leap, you might want to follow the default YaST proposals in most parts, but you can also adjust the settings as described here to fine-tune your system according to your preferences. Help for each installation step is provided by clicking .
If the installer does not detect your mouse correctly, use →| for navigation, arrow keys to scroll, and Enter to confirm a selection. Various buttons or selection fields contain a letter with an underscore. Use Alt–Letter to select a button or a selection directly instead of navigating there with →|.
During the installation and upgrade process, YaST is able to update itself
to solve bugs in the installer that were discovered after the release. This
functionality is disabled by default; to enable it, set
the boot parameter self_update to 1. For more information, see
Section 2.4.4, “Enabling the Installer Self-Update”.
Although this feature was designed to run without user intervention, it is worth knowing how it works. If you are not interested, you can jump directly to Section 3.2, “Language, Keyboard and License Agreement” and skip the rest of this section.
The installer self-update is executed before the language selection step. This means that progress and errors which happen during this process are displayed in English by default.
To use another language for this part of the installer, press
F2 in the DVD boot menu and select the language from the
list. Alternatively, use the language boot parameter
(for example, language=de_DE).
The process can be broken down into two different parts:
Determine the update repository location.
Download and apply the updates to the installation system.
Installer self-updates are distributed as regular RPM packages via a dedicated repository, so the first step is to find out the repository URL.
No matter which of the following options you use, only the installer self-update repository URL is expected, for example:
self_update=https://www.example.com/my_installer_updates/
Do not supply any other repository URL—for example the URL of the software update repository.
YaST will try the following sources of information:
The self_update boot parameter. (For more details,
see Section 2.4.4, “Enabling the Installer Self-Update”.) If you
specify a URL, it will take precedence over any other method.
The /general/self_update_url profile element, if
you are using AutoYaST.
If none of the previous attempts worked, the fallback URL (defined in the installation media) will be used.
Once the update repository is determined, YaST will check whether an update is available. If so, all the updates will be downloaded and applied to the installation system.
Finally, YaST will be restarted to load the new version and the welcome screen will be shown. If no updates were available, the installation will continue without restarting YaST.
Update signatures will be checked to ensure integrity and authorship. If a signature is missing or invalid, you will be asked whether you want to apply the update.
To download installer updates, YaST needs network access. By default, it tries to use DHCP on all network interfaces. If there is a DHCP server in the network, it will work automatically.
If you need a static IP setup, you can use the ifcfg
boot argument. For more details, see the linuxrc documentation at
https://en.opensuse.org/Linuxrc.
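As a sketch, a static IPv4 setup at the boot prompt might look like the following (interface name and addresses are placeholders, and the exact ifcfg value format should be verified against the linuxrc documentation):

```
ifcfg=eth0=192.168.1.100/24,192.168.1.1,192.168.1.1
```

Here the values after the interface name are the IP address with prefix length, the default gateway, and the name server.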
YaST can use a user-defined repository instead of the official one by
specifying a URL through the self_update boot option.
However, the following points should be considered:
Only HTTP/HTTPS and FTP repositories are supported.
Only RPM-MD repositories are supported (required by SMT).
Packages are not installed in the usual way: they are only unpacked, and no scripts are executed.
No dependency checks are performed. Packages are installed in alphabetical order.
Files from the packages override the files from the original installation media. This means that the update packages need not contain all files, only the files that have changed. Unchanged files are omitted to save memory and download bandwidth.
Currently, it is not possible to use more than one repository as source for installer self-updates.
Start the installation of openSUSE Leap by choosing your language. Changing the language will automatically preselect a corresponding keyboard layout. Override this proposal by selecting a different keyboard layout from the drop-down box. The language selected here is also used to assume a time zone for the system clock. This setting can be modified later in the installed system as described in Chapter 6, Changing Language and Country Settings with YaST.
Read the license agreement that is displayed beneath the language and keyboard selection thoroughly. Use to access translations. By proceeding with , you agree to the license agreement. Choose to cancel the installation if you do not agree to the license terms.
After booting into the installation, the installation routine is set up. During this setup, an attempt to configure at least one network interface with DHCP is made. In case this attempt fails, the dialog launches. Choose a network interface from the list and click to change its settings. Use the tabs to configure DNS and routing. See Section 13.4, “Configuring a Network Connection with YaST” for more details.
If at least one network interface is configured via linuxrc, automatic DHCP configuration is disabled and configuration from linuxrc is imported and used.
To access a SAN or a local RAID during the installation, use the libstoragemgmt command line client:
Switch to a console with Ctrl–Alt–F2.
Install the libstoragemgmt extension by running extend
libstoragemgmt.
Now you have access to the lsmcli command. For more
information, run lsmcli --help.
To return to the installer, press Alt–F7.
Supported devices include NetApp ONTAP, all SMI-S compatible SAN providers, and LSI MegaRAID.
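For illustration, listing attached storage systems and volumes from the installer console might look like the following (sub-command names follow the libstoragemgmt client; verify the exact syntax and any required connection URI with lsmcli --help):

```
# List the storage systems lsmcli can reach
lsmcli list --type SYSTEMS
# List the volumes they export
lsmcli list --type VOLUMES
```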
openSUSE Leap supports a broad range of features. To simplify the installation, YaST offers predefined use cases which adjust the system to be installed so it is tailored for the selected scenario. Currently this affects the package set and the suggested partitioning scheme.
Choose the that meets your requirements best:
Select this scenario when installing on a “real” machine or a fully virtualized guest.
Select this scenario when installing on a machine that should serve as a KVM host that can run other virtual machines.
Select this scenario when installing on a machine that should serve as a Xen host that can run other virtual machines.
Define a partition setup for openSUSE Leap in this step. The installer creates a proposal for one of the available disks containing a root partition formatted with Btrfs, a swap partition, and a home partition formatted with XFS. On hard disks smaller than 20 GB the proposal does not include a separate home partition. If one or more swap partitions have been detected on the available hard disks, these partitions will be used. You have several options to proceed:
To accept the proposal without any changes, click to proceed with the installation workflow.
To adjust the proposal, choose . In the screen, you can enable Logical Volume Management (LVM) and activate disk encryption.
Confirm with and specify the . You can adjust the file system for the root partition and create a separate home partition.
If the root file system format is Btrfs, you can also enable or disable Btrfs snapshots here.
To create a custom partition setup click . Select either if you want to start with the suggested disk layout, or to ignore the suggested layout and start with the existing layout on the disk. You can , , , or partitions.
You can also set up Logical Volumes (LVM), configure software RAID and device mapping (DM), encrypt partitions, mount NFS shares, and manage tmpfs volumes with the Expert Partitioner. To fine-tune settings such as the subvolume and snapshot handling for each Btrfs partition, choose . For more information about custom partitioning and configuring advanced features, refer to Section 5.1, “Using the YaST Partitioner”.
A UEFI machine requires an EFI system partition
that must be mounted to /boot/efi. This partition
must be formatted with the FAT file system.
If an EFI system partition is already present on your system (for
example from a previous Windows installation) use it by mounting it to
/boot/efi without formatting it.
openSUSE Leap can be configured to support snapshots which provide the ability to do rollbacks of system changes. openSUSE Leap uses Snapper in conjunction with Btrfs for this feature. Btrfs needs to be set up with snapshots enabled for the root partition. Refer to Chapter 3, System Recovery and Snapshot Management with Snapper for details on Snapper.
Being able to create system snapshots that enable rollbacks requires most
of the system directories to be mounted on a single partition. Refer to
Section 3.1, “Default Setup” for more information. This also
includes /usr and /var. Only
directories that are excluded from snapshots (see Section 3.1.2, “Directories That Are Excluded from Snapshots” for a list) may reside on separate
partitions. Among others, this list includes
/usr/local, /var/log, and
/tmp.
If you do not plan to use Snapper for system rollbacks, the partitioning restrictions mentioned above do not apply.
The default partitioning setup suggests the root partition as Btrfs. To encrypt the root partition, make sure to use the GPT partition table type instead of the default MSDOS type. Otherwise the GRUB2 boot loader may not have enough space for the second stage loader.
Installing to and booting from existing software RAID volumes is supported for Disk Data Format (DDF) volumes and Intel Matrix Storage Manager (IMSM) volumes. IMSM is also known by the following names:
Intel Rapid Storage Technology
Intel Matrix Storage Technology
Intel Application Accelerator / Intel Application Accelerator RAID Edition
FCoE and iSCSI devices appear asynchronously during the
boot process. While the initrd guarantees that these devices are
set up correctly for the root file system, there are no such
guarantees for any other file systems or mount points like
/usr. Hence, placing system mount points like
/usr or /var on such devices is not
supported. To use these devices, ensure correct
synchronization of the respective services and devices.
In case the disk selected for the suggested partitioning proposal contains a large Windows FAT or NTFS partition, it will automatically be resized to make room for the openSUSE Leap installation. To avoid data loss it is strongly recommended to
make sure the partition is not fragmented (run a defragmentation program from Windows prior to the openSUSE Leap installation)
double-check that the suggested size for the Windows partition is big enough
back up your data prior to the openSUSE Leap installation
To adjust the proposed size of the Windows partition, use the .
In this dialog, select your region and time zone. Both are preselected according to the installation language. To change the preselected values, either use the map or the drop-down boxes for and . When using the map, point the cursor at the rough direction of your region and left-click to zoom. Now choose your country or region by left-clicking. Right-click to return to the world map.
To set up the clock, choose whether the . If you run another operating system on your machine, such as Microsoft Windows, it is likely your system uses local time instead. If you run Linux on your machine, set the hardware clock to UTC and have the switch from standard time to daylight saving time performed automatically.
The switch from standard time to daylight saving time (and vice versa) can only be performed automatically when the hardware clock (CMOS clock) is set to UTC. This also applies if you use automatic time synchronization with NTP, because automatic synchronization will only be performed if the time difference between the hardware and system clock is less than 15 minutes.
Since a wrong system time can cause serious problems (missed backups, dropped mail messages, mount failures on remote file systems, etc.), it is strongly recommended to always set the hardware clock to UTC.
If a network is already configured, you can configure time synchronization with an NTP server. Click to either alter the NTP settings or to set the time. See Chapter 18, Time Synchronization with NTP for more information on configuring the NTP service. When finished, click to continue the installation.
If running without NTP configured, consider setting
SYSTOHC=no (sysconfig variable) to
avoid saving unsynchronized time into the hardware clock.
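On openSUSE, this variable has traditionally lived in /etc/sysconfig/clock (treat the exact file location as an assumption and verify it on your release):

```
## /etc/sysconfig/clock (excerpt)
# Do not write the (possibly unsynchronized) system time
# back to the hardware clock at shutdown.
SYSTOHC="no"
```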
Select the desktop system you would like to use in the dialog. and are among the most widely used desktops on Linux.
If setting up a server, you probably do not need a graphical user interface. Choose in this case.
More desktop systems, such as XFCE, LXDE, MATE, and Enlightenment are available after having enabled the online repositories. Doing so is also recommended if you want to get the latest security updates and fixes during the installation. A working Internet connection is required. To install a user interface, choose . You have the following choices:
The contains open source software (OSS). Compared to the DVD installation media, it contains many additional software packages, among them the above-mentioned desktop systems. Choose this repository to install them.
The contains security updates and fixes for packages from the and the DVD installation media. Choosing this repository is recommended for all installation scenarios.
The contains packages with a proprietary software license. Choosing it is not required for installing a custom desktop system.
Choosing is recommended when also having chosen the . It contains the respective updates and security fixes.
All other repositories are intended for experienced users and developers. Click on a repository name to get more information.
Confirm your selection with . Depending on your choice, you need to confirm one or more license agreements. Do so by choosing until you return to the screen. Now choose and , to proceed to the , where you can choose a custom desktop system from the left-hand pane.
Create a local user in this step. After entering the first name and last
name, either accept the proposal or specify a new
that will be used to log in. Only
use lowercase letters (a-z), digits (0-9) and the characters
. (dot), - (hyphen) and
_ (underscore). Special characters, umlauts and accented
characters are not allowed.
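The character rules above can be sketched as a small shell check (the function name is hypothetical; YaST performs the equivalent validation internally):

```shell
# Accept only user names built from lowercase letters, digits,
# dot, hyphen and underscore; reject everything else.
is_valid_username() {
  case "$1" in
    "")              return 1 ;;  # empty names are not allowed
    *[!a-z0-9._-]*)  return 1 ;;  # contains a forbidden character
    *)               return 0 ;;
  esac
}
```

For example, tux or john_doe-2 pass the check, while Tux (uppercase) or john doe (space) are rejected.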
Finally, enter a password for the user. Re-enter it for confirmation (to ensure that you did not type something else by mistake). To provide effective security, a password should be at least six characters long and consist of uppercase and lowercase letters, numbers and special characters (7-bit ASCII). Umlauts or accented characters are not allowed. Passwords you enter are checked for weakness. When entering a password that is easy to guess (such as a dictionary word or a name) you will see a warning. It is a good security practice to use strong passwords.
Remember both your user name and the password because they are needed each time you log in to the system.
If you install openSUSE Leap on a machine with one or more existing Linux installations, YaST allows you to import user data such as user names and passwords. Select and then for import.
If you do not want to configure any local users (for example when setting up a client on a network with centralized user authentication), skip this step by choosing and confirming the warning. Network user authentication can be configured at any time later in the installed system; refer to Chapter 5, Managing Users with YaST for instructions.
Two additional options are available:
If checked, the same password you have entered for the user will be used
for the system administrator root. This option is suitable for
stand-alone workstations or machines in a home network that are
administrated by a single user. When not checked, you are prompted for a
system administrator password in the next step of the installation
workflow (see Section 3.9, “Password for the System Administrator root”).
This option automatically logs the current user in to the system when it starts. This is mainly useful if the computer is operated by only one user.
With the automatic login enabled, the system boots straight into your desktop with no authentication. If you store sensitive data on your system, you should not enable this option if the computer can also be accessed by others.
If you install on a system where a previous Linux installation was found, you may . Click for a list of available user accounts. Select one or more users.
In an environment where users are centrally managed (for example by NIS or LDAP) you may want to skip the creation of local users. Select in this case.
If you have not chosen in the previous step, you will be prompted to enter
a password for the System Administrator root. Otherwise this
configuration step is skipped.
root is the name of the superuser, or the administrator of the system.
Unlike regular users, root has unlimited
rights to change the system configuration, install programs, and set up new
hardware. If users forget their passwords or have other problems with the
system, root can help. The root account should only be used for
system administration, maintenance, and repair. Logging in as root for
daily work is rather risky: a single mistake could lead to irretrievable
loss of system files.
For verification purposes, the password for root must be entered
twice. Do not forget the root password. After having been entered,
this password cannot be retrieved.
It is recommended to use only characters that are available on an English keyboard. In case of a system error, or when you need to start your system in rescue mode, a localized keyboard might not be available.
The root password can be changed any time later in the installed
system. To do so run YaST and start › .
root User
The user root has all the permissions needed to make changes to the
system. To carry out such tasks, the root password is required. You
cannot carry out any administrative tasks without this password.
On the last step before the real installation takes place, you can alter installation settings suggested by the installer. To modify the suggestions, click the respective headline. After having made changes to a particular setting, you are always returned to the Installation Settings window, which is updated accordingly.
openSUSE Leap contains several software patterns for various application purposes. Click to open the screen where you can modify the pattern selection according to your needs. Select a pattern from the list and see a description in the right-hand part of the window. Each pattern contains several software packages needed for specific functions (for example Multimedia or Office software). For a more detailed selection based on software packages to install, select to switch to the YaST Software Manager.
You can also install additional software packages or remove software packages from your system at any later time with the YaST Software Manager. For more information, refer to Chapter 11, Installing or Removing Software.
The language you selected with the first step of the installation will be used as the primary (default) language for the system. You can add secondary languages from within the dialog by choosing › › .
The installer proposes a boot configuration for your system. Other operating systems found on your computer, such as Microsoft Windows or other Linux installations, will automatically be detected and added to the boot loader. However, openSUSE Leap will be booted by default. Normally, you can leave these settings unchanged. If you need a custom setup, modify the proposal according to your needs. For information, see Section 12.3, “Configuring the Boot Loader with YaST”.
Booting a configuration where /boot resides on a
software RAID 1 device is supported, but it requires installing the boot
loader into the MBR ( › ). Having
/boot on software RAID devices with a level other
than RAID 1 is not supported.
By default SuSEfirewall2 is enabled on all configured network interfaces. To globally disable the firewall for this computer, click (not recommended).
If the firewall is activated, all interfaces are configured to be in the “External Zone”, where all ports are closed by default, ensuring maximum security. The only port you can open during the installation is port 22 (SSH), to allow remote access. All other services requiring network access (such as FTP, Samba, Web server, etc.) will only work after having adjusted the firewall settings. Refer to Chapter 15, Masquerading and Firewalls for more information.
To enable remote access via the secure shell (SSH), make sure the
SSH service is enabled and the SSH
port is open.
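In the installed system, the same result can be achieved by editing the SuSEfirewall2 configuration (the variable name below is the one used by SuSEfirewall2; verify it against your release):

```
## /etc/sysconfig/SuSEfirewall2 (excerpt)
# Open TCP port 22 (SSH) in the external zone.
FW_SERVICES_EXT_TCP="ssh"
```

followed by restarting the firewall, for example with systemctl restart SuSEfirewall2.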
If you install openSUSE Leap on a machine with one or more existing Linux installations, the installation routine imports the SSH host key with the most recent access time from an existing installation by default.
If you are performing a remote administration over VNC, you can also specify whether the machine should be accessible via VNC after the installation. Note that enabling VNC also requires you to set the to .
openSUSE Leap can boot into two different targets (formerly known as “runlevels”). The target starts a display manager, whereas the target starts the command line interface.
The default target is . In case you have not installed the patterns, you need to change it to . If the system should be accessible via VNC, you need to choose .
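The default target can also be inspected and changed later from the installed system with the standard systemd commands:

```
# Show the current default target
systemctl get-default
# Boot into the graphical target by default
sudo systemctl set-default graphical.target
# Or into the multi-user (command line) target
sudo systemctl set-default multi-user.target
```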
This screen lists all the hardware information the installer could obtain about your computer. When opened for the first time, the hardware detection is started. Depending on your system, this may take some time. Select any item in the list and click to see detailed information about the selected item. Use to save a detailed list to either the local file system or a removable device.
Advanced users can also change the and kernel settings by choosing . A screen with two tabs opens:
Each kernel driver contains a list of device IDs of all devices it supports. If a new device is not in any driver's database, the device is treated as unsupported, even if it can be used with an existing driver. You can add PCI IDs to a device driver here. Only advanced users should attempt to do so.
To add an ID, click and select whether to
enter the data, or whether to choose from a
list. Enter the required data. The is the
directory name from /sys/bus/pci/drivers—if
empty, the name is used as the directory name.
Existing entries can be managed with and
.
Change the here. If is chosen, the default setting for the respective architecture will be used. This setting can also be changed at any time later from the installed system. Refer to Chapter 12, Tuning I/O Performance for details on I/O tuning.
Also activate the here. These keys will let you issue basic commands (such as rebooting the system or writing kernel dumps) in case the system crashes. Enabling these keys is recommended when doing kernel development. Refer to https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html for details.
After configuring all installation settings, click in the Installation Settings window to start the installation. Some software may require a license confirmation. If your software selection includes such software, license confirmation dialogs are displayed. Click to install the software package. When not agreeing to the license, click and the software package will not be installed. In the dialog that follows, confirm with again.
The installation usually takes between 15 and 30 minutes, depending on the system performance and the selected software scope. After having prepared the hard disk and having saved and restored the user settings, the software installation starts. During this procedure a slide show introduces the features of openSUSE Leap. Choose to switch to the installation log or to read important up-to-date information that was not available when the manuals were printed.
After the software installation has completed, the system reboots into the new installation where you can log in. To customize the system configuration or to install additional software packages, start YaST.
If you encounter any problems using the openSUSE Leap installation media, check the integrity of your installation media. Boot from the media and choose from the boot menu. In a running system, start YaST and choose › . To check the openSUSE Leap medium, insert it into the drive and click in the screen of YaST. This may take several minutes. If errors are detected, do not use this medium for installation. Media problems may occur if you burned the medium yourself. Burning the medium at a low speed (4x) helps to avoid problems.
If your computer does not contain a bootable DVD-ROM drive, or if the one you have is not supported by Linux, there are several options for installing your system without a built-in DVD drive:
If it is supported by your BIOS and the installation kernel, boot from external DVD drives or USB storage devices. Refer to Section 2.2, “PC (AMD64/Intel 64/ARM AArch64)” for instructions on how to create a bootable USB storage device.
If a machine lacks a DVD drive, but provides a working Ethernet connection, perform a completely network-based installation.
Linux supports most existing DVD drives. If the system has no DVD drive, it is still possible that an external DVD drive, connected through USB, FireWire, or SCSI, can be used to boot the system. This depends mainly on the interaction of the BIOS and the hardware used. Sometimes a BIOS update may help if you encounter problems.
When installing from a Live CD, you can also create a “Live flash disk” to boot from.
One reason a machine does not boot the installation media can be an incorrect boot sequence setting in the BIOS. The BIOS boot sequence must have the DVD drive set as the first entry for booting. Otherwise the machine will try to boot from another medium, typically the hard disk. Guidance for changing the BIOS boot sequence can be found in the documentation provided with your mainboard, or in the following paragraphs.
The BIOS is the software that enables the very basic functions of a computer. Motherboard vendors provide a BIOS specifically made for their hardware. Normally, the BIOS setup can only be accessed at a specific time—when the machine is booting. During this initialization phase, the machine performs several diagnostic hardware tests. One of them is a memory check, indicated by a memory counter. When the counter appears, look for a line, usually below the counter or somewhere at the bottom, mentioning the key to press to access the BIOS setup. Usually the key to press is one of Del, F1, or Esc. Press this key until the BIOS setup screen appears.
Enter the BIOS using the proper key as announced by the boot routines and wait for the BIOS screen to appear.
To change the boot sequence in an AWARD BIOS, look for the entry. Other manufacturers may have a different name for this, such as . When you have found the entry, select it and confirm with Enter.
In the screen that opens, look for a subentry called or . Change the settings by pressing Page ↑ or Page ↓ until the DVD drive is listed first.
Leave the BIOS setup screen by pressing Esc. To save the changes, select , or press F10. To confirm that your settings should be saved, press Y.
Open the setup by pressing Ctrl–A.
Select . The connected hardware components are now displayed.
Make note of the SCSI ID of your DVD drive.
Exit the menu with Esc.
Open . Under , select and press Enter.
Enter the ID of the DVD drive and press Enter again.
Press Esc twice to return to the start screen of the SCSI BIOS.
Exit this screen and confirm with to boot the computer.
Regardless of what language and keyboard layout your final installation will be using, most BIOS configurations use the US keyboard layout as shown in the following figure:
Installation fails on some hardware types, mainly very old or very recent ones. This often happens because support for this type of hardware is missing in the installation kernel, or because certain functionality included in this kernel, such as ACPI, still causes problems on some hardware.
If your system fails to install using the standard mode from the first installation boot screen, try the following:
With the DVD still in the drive, reboot the machine with Ctrl–Alt–Del or using the hardware reset button.
When the boot screen appears, press F5, use the arrow keys of your keyboard to navigate to and press Enter to launch the boot and installation process. This option disables the support for ACPI power management techniques.
Proceed with the installation as described in Chapter 3, Installation with YaST.
If this fails, proceed as above, but choose instead. This option disables ACPI and DMA support. Most hardware will boot with this option.
If both of these options fail, use the boot options prompt to pass any
additional parameters needed to support this type of hardware to the
installation kernel. For more information about the parameters available as
boot options, refer to the kernel documentation located in
/usr/src/linux/Documentation/kernel-parameters.txt.
Install the kernel-source
package to view the kernel documentation.
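On openSUSE, this can be done with zypper, for example:

```
# Install the kernel sources, then browse the parameter reference
sudo zypper install kernel-source
less /usr/src/linux/Documentation/kernel-parameters.txt
```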
There are other ACPI-related kernel parameters that can be entered at the boot prompt prior to booting for installation:
acpi=off
This parameter disables the complete ACPI subsystem on your computer. This may be useful if your computer cannot handle ACPI or if you think ACPI in your computer causes trouble.
acpi=force
Always enable ACPI even if your computer has an old BIOS dated before
the year 2000. This parameter also enables ACPI if it is set in addition
to acpi=off.
acpi=noirq
Do not use ACPI for IRQ routing.
acpi=ht
Run only enough ACPI to enable hyper-threading.
acpi=strict
Be less tolerant of platforms that are not strictly ACPI specification compliant.
pci=noacpi
Disable PCI IRQ routing of the new ACPI system.
pnpacpi=off
This option is for serial or parallel problems when your BIOS setup contains wrong interrupts or ports.
notsc
Disable the time stamp counter. This option can be used to work around timing problems on your system. It is a recent feature; if you see regressions on your machine, especially time-related ones or even total hangs, this option is worth a try.
nohz=off
Disable the nohz feature. If your machine hangs, this option may help. Otherwise it is of no use.
Once you have determined the right parameter combination, YaST automatically writes these parameters to the boot loader configuration to make sure that the system boots properly next time.
If inexplicable errors occur when the kernel is loaded or during the installation, select in the boot menu to check the memory. If returns an error, it is usually a hardware error.
After you insert the medium into your drive and reboot your machine, the installation screen comes up, but after you select , the graphical installer does not start.
There are several ways to deal with this situation:
Try to select another screen resolution for the installation dialogs.
Select for installation.
Do a remote installation via VNC using the graphical installer.
Boot for installation.
Press F3 to open a menu from which to select a lower resolution for installation purposes.
Select and proceed with the installation as described in Chapter 3, Installation with YaST.
Boot for installation.
Press F3 and select .
Select and proceed with the installation as described in Chapter 3, Installation with YaST.
Boot for installation.
Enter the following text at the boot options prompt:
vnc=1 vncpassword=SOME_PASSWORD
Replace SOME_PASSWORD with the password to use for VNC installation.
Select then press Enter to start the installation.
Instead of starting right into the graphical installation routine, the system continues to run in a text mode, then halts, displaying a message containing the IP address and port number at which the installer can be reached via a browser interface or a VNC viewer application.
If using a browser to access the installer, launch the browser and enter the address information provided by the installation routines on the future openSUSE Leap machine and press Enter:
http://IP_ADDRESS_OF_MACHINE:5801
A dialog opens in the browser window prompting you for the VNC password. Enter it and proceed with the installation as described in Chapter 3, Installation with YaST.
Installation via VNC works with any browser under any operating system, provided Java support is enabled.
Provide the IP address and password to your VNC viewer when prompted. A window opens, displaying the installation dialogs. Proceed with the installation as usual.
You inserted the medium into the drive, the BIOS routines are finished, but the system does not start with the graphical boot screen. Instead it launches a very minimalist text-based interface. This may happen on any machine not providing sufficient graphics memory for rendering a graphical boot screen.
Although the text boot screen looks minimalist, it provides nearly the same functionality as the graphical one:
Unlike the graphical interface, the different boot options cannot be selected using the cursor keys of your keyboard. The boot menu of the text mode boot screen offers some keywords to enter at the boot prompt. These keywords map to the options offered in the graphical version. Enter your choice and press Enter to launch the boot process.
After selecting a boot option, enter the appropriate keyword at the boot prompt or enter some custom boot options as described in Section 4.4, “Fails to Boot”. To launch the installation process, press Enter.
Use the function keys (F1 ... F12) to determine the screen resolution for installation. If you need to boot in text mode, choose F3.
During installation, you could have created a local user for your system. With the YaST module you can add more users or edit existing ones. It also lets you configure your system to authenticate users with a network server.
To administer users or groups, start YaST and click › . Alternatively, start the dialog directly by running sudo yast2 users & from a command line.
Every user is assigned a system-wide user ID (UID). Apart from the users which can log in to your machine, there are also several system users for internal use only. Each user is assigned to one or more groups. Similar to system users, there are also system groups for internal use.
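These IDs and group memberships can be inspected from the command line; a short sketch using standard tools:

```shell
id -u                             # numeric user ID (UID) of the current user
id -gn                            # primary group
id -Gn                            # all groups, including secondary ones
getent passwd root | cut -d: -f3  # UID field of an account; root is always 0
```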
Depending on the set of users you choose to view and modify with the dialog (local users, network users, or system users), the main window shows several tabs. These allow you to execute the following tasks:
From the tab create, modify, delete or temporarily disable user accounts as described in Section 5.2, “Managing User Accounts”. Learn about advanced options like enforcing password policies, using encrypted home directories, or managing disk quotas in Section 5.3, “Additional Options for User Accounts”.
Local user accounts are created according to the settings defined on the tab. Learn how to change the default group assignment, or the default path and access permissions for home directories in Section 5.4, “Changing Default Settings for Local Users”.
Learn how to change the group assignment for individual users in Section 5.5, “Assigning Users to Groups”.
From the tab, you can add, modify or delete existing groups. Refer to Section 5.6, “Managing Groups” for information on how to do this.
When your machine is connected to a network that provides user authentication methods like NIS or LDAP, you can choose between several authentication methods on the tab. For more information, refer to Section 5.7, “Changing the User Authentication Method”.
For user and group management, the dialog provides similar functionality. You can easily switch between the user and group administration view by choosing the appropriate tab at the top of the dialog.
Filter options allow you to define the set of users or groups you want to modify: On the or tab, click to view and edit users or groups according to certain categories, such as or , for example (if you are part of a network which uses LDAP). With › you can also set up and use a custom filter.
Depending on the filter you choose, not all of the following options and functions will be available from the dialog.
YaST offers to create, modify, delete or temporarily disable user accounts. Do not modify user accounts unless you are an experienced user or administrator.
File ownership is bound to the user ID, not to the user name. After a user ID change, the files in the user's home directory are automatically adjusted to reflect this change. However, after an ID change, the user no longer owns the files he created elsewhere in the file system unless the file ownership for those files is manually modified.
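That ownership is stored as a number, not a name, can be verified with stat (GNU coreutils assumed):

```shell
# A newly created file records the numeric UID of its creator;
# the user name shown by ls or stat is only resolved from that number.
f=$(mktemp)
stat -c 'uid=%u owner=%U' "$f"   # numeric UID and the name it resolves to
rm "$f"
```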
In the following, learn how to set up default user accounts. For further options, refer to Section 5.3, “Additional Options for User Accounts”.
Open the YaST dialog and click the tab.
With define the set of users you want to manage. The dialog lists users in the system and the groups the users belong to.
To modify options for an existing user, select an entry and click .
To create a new user account, click .
Enter the appropriate user data on the first tab, such as (which is used for login) and . This data is sufficient to create a new user. If you click now, the system will automatically assign a user ID and set all other values according to the default.
Activate if you want any kind of
system notifications to be delivered to this user's mailbox. This creates
a mail alias for root and the user can read the system mail without
having to first log in as root.
The mails sent by system services are stored in the local mailbox
/var/spool/mail/USERNAME,
where USERNAME is the login name of the
selected user. To read e-mails, you can use the mail
command.
To adjust further details such as the user ID or the path to the user's home directory, do so on the tab.
If you need to relocate the home directory of an existing user, enter the path to the new home directory there and move the contents of the current home directory with . Otherwise, a new home directory is created without any of the existing data.
To force users to regularly change their password or set other password options, switch to and adjust the options. For more details, refer to Section 5.3.2, “Enforcing Password Policies”.
If all options are set according to your wishes, click .
Click to close the administration dialog and to save the changes. A newly added user can now log in to the system using the login name and password you created.
Alternatively, to save all changes without exiting the dialog, click › .
For a new (local) user on a laptop which also needs to integrate into a network environment where this user already has a user ID, it is useful to match the (local) user ID to the ID in the network. This ensures that the file ownership of the files the user creates “offline” is the same as if he had created them directly on the network.
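On the command line, the same result can be achieved with useradd; a sketch assuming root privileges (the user name tux and UID 4211 are hypothetical):

```shell
# Create a local user whose UID matches an existing network identity,
# so files created offline carry the same numeric owner.
sudo useradd -m -u 4211 -c "Tux Example" tux
id -u tux   # should print 4211
```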
Open the YaST dialog and click the tab.
To temporarily disable a user account without deleting it, select the user from the list and click . Activate . The user cannot log in to your machine until you enable the account again.
To delete a user account, select the user from the list and click . Choose if you also want to delete the user's home directory or if you want to retain the data.
In addition to the settings for a default user account, openSUSE® Leap offers further options, such as options to enforce password policies, use encrypted home directories or define disk quotas for users and groups.
If you use the GNOME desktop environment you can configure Auto Login for a certain user and Passwordless Login for all users. Auto login causes a user to become automatically logged in to the desktop environment on boot. This functionality can only be activated for one user at a time. Login without password allows all users to log in to the system after they have entered their user name in the login manager.
Enabling Auto Login or Passwordless Login on a machine that can be accessed by more than one person is a security risk. Without the need to authenticate, any user can gain access to your system and your data. If your system contains confidential data, do not use this functionality.
To activate auto login or login without password, access these functions in YaST with › .
On any system with multiple users, it is a good idea to enforce at least basic password security policies. Users should change their passwords regularly and use strong passwords that cannot easily be exploited. For local users, proceed as follows:
Open the YaST dialog and select the tab.
Select the user for which to change the password options and click .
Switch to the tab. The user's last password change is displayed on the tab.
To make the user change his password at next login, activate .
To enforce password rotation, set a and a .
To remind the user to change his password before it expires, set the number of .
To restrict the period of time the user can log in after his password has expired, change the value in .
You can also specify a certain expiration date for the complete account. Enter the in YYYY-MM-DD format. Note that this setting is not password-related but rather applies to the account itself.
For more information about the options and about the default values, click .
Apply your changes with .
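The same aging options can be set without YaST using chage from the shadow suite; a sketch (user name tux is hypothetical, root privileges required):

```shell
sudo chage -l tux             # list the current password-aging information
sudo chage -M 90 -W 7 tux     # maximum age 90 days, warn 7 days before expiry
sudo chage -d 0 tux           # force a password change at the next login
sudo chage -E 2025-12-31 tux  # expire the whole account on a fixed date
```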
To prevent system capacities from being exhausted without notification, system administrators can set up quotas for users or groups. Quotas can be defined for one or more file systems and restrict the amount of disk space that can be used and the number of inodes (index nodes) that can be created there. Inodes are data structures on a file system that store basic information about a regular file, directory, or other file system object. They store all attributes of a file system object (like user and group ownership, read, write, or execute permissions), except file name and contents.
openSUSE Leap allows usage of soft and
hard quotas. Additionally, grace intervals can be
defined that allow users or groups to temporarily violate their quotas by
certain amounts.
Defines a warning level at which users are informed that they are nearing their limit. This gives administrators time to urge users to clean up and reduce their data on the partition. The soft quota limit is usually lower than the hard quota limit.
Defines the limit at which write requests are denied. When the hard quota is reached, no more data can be stored and applications may crash.
Defines the time between the overflow of the soft quota and a warning being issued. Usually set to a rather low value of one or several hours.
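The disk-space and inode bookkeeping that quotas restrict can be observed directly with standard tools (GNU coreutils assumed):

```shell
# Every file system object occupies exactly one inode.
f=$(mktemp)
stat -c 'inode=%i uid=%u mode=%a' "$f"  # attributes stored in the inode
df -i "$f" | tail -n1                   # inode usage on the underlying file system
rm "$f"
```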
To configure quotas for certain users and groups, you need to enable quota support for the respective partition in the YaST Expert Partitioner first.
In YaST, select › and click to proceed.
In the , select the partition for which to enable quotas and click .
Click and activate . If the quota package is not
already installed, it will be installed once you confirm the respective
message with .
Confirm your changes and leave the .
Make sure the service quotaon is running by entering the following command:
tux > sudo systemctl status quotaon
It should be marked as being active. If this is not the case, start it with the command sudo systemctl start quotaon.
Now you can define soft or hard quotas for specific users or groups and set time periods as grace intervals.
In the YaST , select the user or the group you want to set the quotas for and click .
On the tab, select the entry and click to open the dialog.
From , select the partition to which the quota should apply.
Below , restrict the amount of disk space. Enter the number of 1 KB blocks the user or group may have on this partition. Specify a and a value.
Additionally, you can restrict the number of inodes the user or group may have on the partition. Below , enter a and .
You can only define grace intervals if the user or group has already exceeded the soft limit specified for size or inodes. Otherwise, the time-related text boxes are not activated. Specify the time period for which the user or group is allowed to exceed the limits set above.
Confirm your settings with .
Click to close the administration dialog and save the changes.
Alternatively, to save all changes without exiting the dialog, click › .
openSUSE Leap also ships command-line tools like repquota or warnquota. System administrators can use these tools to control the disk usage or send e-mail notifications to users exceeding their quota. Using quota_nld, administrators can also forward kernel messages about exceeded quotas to D-BUS. For more information, refer to the repquota, warnquota, and quota_nld man pages.
When creating new local users, several default settings are used by YaST. These include, for example, the primary group and the secondary groups the user belongs to, or the access permissions of the user's home directory. You can change these default settings to meet your requirements:
Open the YaST dialog and select the tab.
To change the primary group the new users should automatically belong to, select another group from .
To modify the secondary groups for new users, add or change groups in . The group names must be separated by commas.
If you do not want to use
/home/USERNAME as default
path for new users' home directories, modify the .
To change the default permission modes for newly created home directories,
adjust the umask value in . For
more information about umask, refer to Chapter 10, Access Control Lists in Linux
and to the umask man page.
For information about the individual options, click .
Apply your changes with .
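How the umask value translates into permissions for newly created files and directories can be sketched in a shell (GNU stat assumed):

```shell
# A umask of 022 clears write permission for group and others:
# new directories get 777 & ~022 = 755, new files 666 & ~022 = 644.
umask 022
d=/tmp/umask_demo_$$
mkdir "$d"
stat -c %a "$d"    # prints 755
rmdir "$d"
```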
Local users are assigned to several groups according to the default settings which you can access from the dialog on the tab. In the following, learn how to modify an individual user's group assignment. If you need to change the default group assignments for new users, refer to Section 5.4, “Changing Default Settings for Local Users”.
Open the YaST dialog and click the tab. It lists users and the groups the users belong to.
Click and switch to the tab.
To change the primary group the user belongs to, click and select the group from the list.
To assign the user additional secondary groups, activate the corresponding check boxes in the list.
Click to apply your changes.
Click to close the administration dialog and save the changes.
Alternatively, to save all changes without exiting the dialog, click › .
With YaST you can also easily add, modify or delete groups.
Open the YaST dialog and click the tab.
With define the set of groups you want to manage. The dialog lists groups in the system.
To create a new group, click .
To modify an existing group, select the group and click .
In the following dialog, enter or change the data. The list on the right shows an overview of all available users and system users which can be members of the group.
To add existing users to a new group, select them from the list by checking the corresponding box. To remove them from the group, deactivate the box.
Click to apply your changes.
Click to close the administration dialog and save the changes.
Alternatively, to save all changes without exiting the dialog, click › .
A group can only be deleted if it contains no members. To delete a group, select it from the list and click . Click to close the administration dialog and save the changes. Alternatively, to save all changes without exiting the dialog, click › .
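The corresponding command-line tools from the shadow suite can be sketched as follows (group developers and user tux are hypothetical, root privileges required):

```shell
sudo groupadd developers          # create a new group
sudo usermod -aG developers tux   # add tux as a secondary member
getent group developers           # inspect the group and its members
sudo gpasswd -d tux developers    # remove tux from the group again
sudo groupdel developers          # delete the (now empty) group
```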
When your machine is connected to a network, you can change the authentication method. The following options are available:
Users are administered centrally on a NIS server for all systems in the network. For details, see Chapter 3, Using NIS.
The System Security Services Daemon (SSSD) can locally cache user data and then allow users to use the data, even if the real directory service is (temporarily) unreachable. For details, see Section 4.3, “SSSD”.
SMB authentication is often used in mixed Linux and Windows networks. For details, see Chapter 21, Samba and Chapter 7, Active Directory Support.
To change the authentication method, proceed as follows:
Open the dialog in YaST.
Click the tab to show an overview of the available authentication methods and the current settings.
To change the authentication method, click and select the authentication method you want to modify. This takes you directly to the client configuration modules in YaST. For information about the configuration of the appropriate client, refer to the following sections:
NIS: Section 3.2, “Configuring NIS Clients”
LDAP: Section 4.2, “Configuring an Authentication Client with YaST”
Samba: Section 21.5.1, “Configuring a Samba Client with YaST”
After accepting the configuration, return to the overview.
Click to close the administration dialog.
By default, openSUSE Leap creates several user accounts which cannot be deleted. These users are typically defined in the Linux Standard Base. The following list provides the common user names and their purpose:
bin, daemon
Legacy users, included for compatibility with legacy applications. New applications should no longer use these user names.
gdm
Used by the GNOME Display Manager (GDM) to provide graphical logins and to manage local and remote displays.
lp
Used by the printer daemon for the Common Unix Printing System (CUPS).
mail
User reserved for mailer programs like sendmail or postfix.
man
Used by man to access man pages.
messagebus
Used to access D-Bus (desktop bus), a software bus for inter-process communication. Daemon is dbus-daemon.
nobody
User that owns no files and is in no privileged groups. Nowadays, its use is limited, as the Linux Standard Base recommends providing a separate user account for each daemon.
nscd
Used by the Name Service Caching Daemon. This daemon is a lookup service to improve performance with NIS and LDAP. Daemon is nscd.
polkitd
Used by the PolicyKit Authorization Framework, which defines and handles authorization requests for unprivileged processes. Daemon is polkitd.
postfix
Used by the Postfix mailer.
pulse
Used by the PulseAudio sound server.
root
Used by the system administrator, providing all appropriate privileges.
rpc
Used by the rpcbind command, an RPC port mapper.
rtkit
Used by the rtkit package, which provides a D-Bus system service for real-time scheduling mode.
salt
User for parallel remote execution provided by Salt. Daemon is named salt-master.
scard
User for communication with smart cards and readers. Daemon is named pcscd.
srvGeoClue
Used by the GeoClue D-Bus service to provide location information.
sshd
Used by the Secure Shell daemon (SSH) to ensure secure and encrypted communication over insecure networks.
statd
Used by the Network Status Monitor protocol (NSM), implemented in the rpc.statd daemon, to listen for reboot notifications.
systemd-coredump
Used by the /usr/lib/systemd/systemd-coredump command to acquire, save and process core dumps.
systemd-network
Used by the /usr/lib/systemd/systemd-networkd command to manage networks.
systemd-timesync
Used by the /usr/lib/systemd/systemd-timesyncd command to synchronize the local system clock with a remote Network Time Protocol (NTP) server.
Working in different countries or having to work in a multilingual
environment requires your computer to be set up to support this.
openSUSE® Leap can handle different locales in parallel.
A locale is a set of parameters that defines the language and country
settings reflected in the user interface.
The main system language was selected during installation and keyboard and time zone settings were adjusted. However, you can install additional languages on your system and determine which of the installed languages should be the default.
For those tasks, use the YaST language module as described in Section 6.1, “Changing the System Language”. Install secondary languages to get optional localization if you need to start applications or desktops in languages other than the primary one.
Apart from that, the YaST timezone module allows you to adjust your country and timezone settings accordingly. It also lets you synchronize your system clock against a time server. For details, refer to Section 6.2, “Changing the Country and Time Settings”.
Depending on how you use your desktop and whether you want to switch the entire system to another language or only the desktop environment itself, there are several ways to do this:
Proceed as described in Section 6.1.1, “Modifying System Languages with YaST” and Section 6.1.2, “Switching the Default System Language” to install additional localized packages with YaST and to set the default language. Changes are effective after the next login. To ensure that the entire system reflects the change, reboot the system or close and restart all running services, applications, and programs.
Provided you have previously installed the desired language packages for your desktop environment with YaST as described below, you can switch the language of your desktop using the desktop's control center. Refer to Section 3.2.2, “Configuring Language Settings” for details. After the X server has been restarted, your entire desktop reflects your new choice of language. Applications not belonging to your desktop framework are not affected by this change and may still appear in the language that was set in YaST.
You can also run a single application in another language (that has already been installed with YaST). To do so, start it from the command line by specifying the language code as described in Section 6.1.3, “Switching Languages for Standard X and GNOME Applications”.
YaST knows two different language categories:
The primary language set in YaST applies to the entire system, including YaST and the desktop environment. This language is used whenever available unless you manually specify another language.
Install secondary languages to make your system multilingual. Languages installed as secondary languages can be selected manually for a specific situation. For example, use a secondary language to start an application in a certain language to do word processing in this language.
Before installing additional languages, determine which of them should be the default system language (primary language).
To access the YaST language module, start YaST and click › .
Alternatively, start the dialog directly by
running sudo yast2 language & from a command line.
When installing additional languages, YaST also allows you to set different locale settings for the user root, see Step 4. The option determines how the locale variables (LC_*) in the file /etc/sysconfig/language are set for root. You can either set them to the same locale as for normal users, keep them unaffected by any language changes, or only set the variable RC_LC_CTYPE to the same values as for the normal users. This variable sets the localization for language-specific function calls.
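These settings end up as shell-style variables in /etc/sysconfig/language; a hypothetical excerpt (the values shown are examples only, not defaults):

```shell
# Locale used by normal users
RC_LANG="en_US.UTF-8"
# Per-category override: localization for language-specific function calls
RC_LC_CTYPE="de_DE.UTF-8"
# Empty: do not override the remaining LC_* categories
RC_LC_ALL=""
```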
To add additional languages in the YaST language module, select the you want to install.
To make a language the default language, set it as .
Additionally, adapt the keyboard to the new primary language and adjust the time zone, if appropriate.
For advanced keyboard or time zone settings, select › or › in YaST to start the respective dialogs. For more information, refer to Section 7.1, “Setting Up Your System Keyboard Layout” and Section 6.2, “Changing the Country and Time Settings”.
To change language settings specific to the user root, click
.
Set to the desired value. For more information, click .
Decide if you want to for
root or not.
If your locale was not included in the list of primary languages available, try specifying it with . However, some localization may be incomplete.
Confirm your changes in the dialogs with . If you have selected secondary languages, YaST installs the localized software packages for the additional languages.
The system is now multilingual. However, to start an application in a language other than the primary one, you need to set the desired language explicitly as explained in Section 6.1.3, “Switching Languages for Standard X and GNOME Applications”.
To globally switch the default system language, start the YaST language module.
Select the desired new system language as .
If you switch to a different primary language, the localized software packages for the former primary language will be removed from the system. To switch the default system language but keep the former primary language as additional language, add it as by enabling the respective check box.
Adjust the keyboard and time zone options as desired.
Confirm your changes with .
After YaST has applied the changes, restart current X sessions (for example, by logging out and logging in again) to make YaST and the desktop applications reflect your new language settings.
After you have installed the respective language with YaST, you can run a single application in another language.
Start the application from the command line by using the following command:
LANG=LANGUAGE application
For example, to start f-spot in German, run
LANG=de_DE f-spot. For other languages, use the
appropriate language code. Get a list of all language codes available with
the locale -av command.
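The same mechanism works with any localized command-line program. A small sketch using GNU date (the de_DE locale must already be generated on the system):

```shell
# Run a single command with a different locale
LANG=de_DE.UTF-8 date +%A        # weekday name in German, e.g. "Montag"
LC_ALL=C date -d 2024-01-01 +%A  # the C locale always yields English: Monday
locale -a | head                 # locales actually available on this system
```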
Using the YaST date and time module, adjust your system date, clock and
time zone information to the area you are working in. To access the YaST
module, start YaST and click › . Alternatively, start the
dialog directly by running
sudo yast2 timezone & from a command line.
First, select a general region, such as . Choose an appropriate country that matches the one you are working in, for example, .
Depending on which operating systems run on your workstation, adjust the hardware clock settings accordingly:
If you run another operating system on your machine, such as Microsoft Windows*, it is likely your system does not use UTC, but local time. In this case, deactivate .
If you only run Linux on your machine, set the hardware clock to UTC and have the switch from standard time to daylight saving time performed automatically.
The switch from standard time to daylight saving time (and vice versa) can only be performed automatically when the hardware clock (CMOS clock) is set to UTC. This also applies if you use automatic time synchronization with NTP, because automatic synchronization will only be performed if the time difference between the hardware and system clock is less than 15 minutes.
Since a wrong system time can cause serious problems (missed backups, dropped mail messages, mount failures on remote file systems, etc.) it is strongly recommended to always set the hardware clock to UTC.
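Whether the system clock is being interpreted as UTC or local time can be checked quickly from a shell (GNU date assumed):

```shell
date -u +'%Z %H:%M'        # the current instant expressed in UTC
date +'%Z %H:%M'           # the same instant in the configured local time zone
TZ=Europe/Berlin date +%Z  # any zone can be inspected via the TZ variable
```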
You can change the date and time manually or opt for synchronizing your machine against an NTP server, either permanently or only for adjusting your hardware clock.
In the YaST timezone module, click to set date and time.
Select and enter date and time values.
Confirm your changes.
Click to set date and time.
Select .
Enter the address of an NTP server, if not already populated.
Click to get your system time set correctly.
To use NTP permanently, enable .
With the button, you can open the advanced NTP configuration. For details, see Section 18.1, “Configuring an NTP Client with YaST”.
Confirm your changes.
Graphics card, monitor, mouse and keyboard can be configured with GNOME tools. See Section 3.3, “Hardware” for details.
The YaST module lets you define the default keyboard layout for the system (also used for the console). Users can modify the keyboard layout in their individual X sessions, using the desktop's tools.
Start the YaST dialog by
clicking › in YaST. Alternatively, start the module
from the command line with sudo yast2 keyboard.
Select the desired from the list.
Optionally, you can also define the keyboard repeat rate or keyboard delay rate in the .
Try the selected settings in the text box.
If the result is as expected, confirm your changes and close the dialog.
The settings are written to /etc/sysconfig/keyboard.
YaST detects most sound cards automatically and configures them with the appropriate values. To change the default settings, or to set up a sound card that could not be configured automatically, use the YaST sound module. There, you can also set up additional sound cards or switch their order.
To start the sound module, start YaST and click › .
Alternatively, start the dialog
directly by running yast2 sound & as user root
from a command line.
The dialog shows all sound cards that were detected.
If you have added a new sound card or YaST could not automatically configure an existing sound card, follow the steps below. For configuring a new sound card, you need to know your sound card vendor and model. If in doubt, refer to your sound card documentation for the required information. For a reference list of sound cards supported by ALSA with their corresponding sound modules, see http://www.alsa-project.org/main/index.php/Matrix:Main.
During configuration, you can choose between the following setup options:
You are not required to go through any of the further configuration steps—the sound card is configured automatically. You can set the volume or any options you want to change later.
Allows you to adjust the output volume and play a test sound during the configuration.
For experts only. Allows you to customize all parameters of the sound card.
Only use this option if you know exactly what you are doing. Otherwise leave the parameters untouched and use the normal or the automatic setup options.
Start the YaST sound module.
To configure a detected but not yet configured sound card, select the respective entry from the list and click .
To configure a new sound card, click . Select your sound card vendor and model and click .
Choose one of the setup options and click .
If you have chosen , you can now test your sound configuration and make adjustments to the volume. You should start at about ten percent volume to avoid damage to your hearing or the speakers.
If all options are set according to your wishes, click .
The dialog shows the newly configured or modified sound card.
To remove a sound card configuration that you no longer need, select the respective entry and click .
Click to save the changes and leave the YaST sound module.
To change the configuration of an individual sound card (for experts only!), select the sound card entry in the dialog and click .
This takes you to the where you can fine-tune several parameters. For more information, click .
To adjust the volume of an already configured sound card or to test the sound card, select the sound card entry in the dialog and click . Select the respective menu item.
The YaST mixer settings provide only basic options. They are intended
for troubleshooting (for example, if the test sound is not audible).
Access the YaST mixer settings from › . For
everyday use and fine-tuning of sound options, use the mixer applet
provided by your desktop or the alsasound command line
tool.
For playback of MIDI files, select › .
When a supported sound card is detected, you can install SoundFonts for playback of MIDI files:
Insert the original driver CD-ROM into your CD or DVD drive.
Select › to copy SF2 SoundFonts™ to your
hard disk. The SoundFonts are saved in the directory
/usr/share/sfbank/creative/.
If you have configured more than one sound card in your system you can
adjust the order of your sound cards. To set a sound card as primary
device, select the sound card in the
and click › . The sound device with index
0 is the default device and thus used by the system and
the applications.
By default, openSUSE Leap uses the PulseAudio sound system. PulseAudio is an abstraction layer that mixes multiple audio streams, bypassing any restrictions the hardware may have. To enable or disable the PulseAudio sound system, click › . If enabled, the PulseAudio daemon is used to play sounds. Disable PulseAudio to use a different sound system system-wide.
The volume and configuration of all sound cards are saved when you click
and leave the YaST sound module. The mixer settings
are saved to the file /etc/asound.state. The ALSA
configuration data is appended to the end of the file
/etc/modprobe.d/sound and written to
/etc/sysconfig/sound.
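The stored mixer state can also be managed manually with the alsactl tool from the ALSA utilities; this is a sketch, assuming alsactl is installed (note that on newer systems the default state file may be /var/lib/alsa/asound.state rather than /etc/asound.state):

```shell
# Save the current mixer settings to the ALSA state file
sudo alsactl store
# Restore them later, for example after a reboot
sudo alsactl restore
```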
YaST can be used to configure a local printer connected to your machine via USB and to set up printing with network printers. It is also possible to share printers over the network. Further information about printing (general information, technical details, and troubleshooting) is available in Chapter 8, Printer Operation.
In YaST, click › to start the printer module. By default it opens in the view, displaying a list of all printers that are available and configured. This is especially useful when you have access to many printers via the network. From here you can also and configure printers.
To be able to print from your system, CUPS must be running. If it is not running, you are asked to start it. Answer with , otherwise you cannot configure printing. If CUPS is not started at boot time, you are also asked to enable this feature. It is recommended to say , otherwise CUPS needs to be started manually after each reboot.
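The same check can be sketched on the command line with systemctl, assuming the CUPS service is named cups as on current openSUSE Leap systems:

```shell
systemctl is-active cups           # prints "active" if the daemon is running
sudo systemctl enable --now cups   # start CUPS now and enable it at boot
```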
Usually a USB printer is automatically detected. There are two possible reasons it is not automatically detected:
The USB printer is switched off.
The communication between printer and computer is not possible. Check the cable and the plugs to make sure that the printer is properly connected. If this is the case, the problem may not be printer-related, but rather a USB-related problem.
Configuring a printer is a three-step process: specify the connection type, choose a driver, and name the print queue for this setup.
For many printer models, several drivers are available. When configuring the
printer, YaST defaults to those marked recommended as a
general rule. Normally it is not necessary to change the driver. However, if
you want a color printer to print only in black and white, you can use a driver that does not support color printing. If you experience performance problems with a PostScript printer
when printing graphics, try to switch from a PostScript driver to a
PCL driver (provided your printer understands PCL).
If no driver for your printer is listed, try to select a generic driver with an appropriate standard language from the list. Refer to your printer's documentation to find out which language (the set of commands controlling the printer) your printer understands. If this does not work, refer to Section 7.3.1.1, “Adding Drivers with YaST” for another possible solution.
A printer is never used directly, but always through a print queue. This ensures that simultaneous jobs can be queued and processed one after the other. Each print queue is assigned to a specific driver, and a printer can have multiple queues. This makes it possible to set up a second queue on a color printer that prints black and white only, for example. Refer to Section 8.1, “The CUPS Workflow” for more information about print queues.
Start the YaST printer module with › .
In the screen click .
If your printer is already listed under Specify the
Connection, proceed with the next step. Otherwise, try to
or start the .
In the text box under Find and Assign a Driver enter
the vendor name and the model name and click .
Choose a driver that matches your printer. It is recommended to choose the driver listed first. If no suitable driver is displayed:
Check your search term
Broaden your search by clicking
Add a driver as described in Section 7.3.1.1, “Adding Drivers with YaST”
Specify the Default paper size.
In the field, enter a unique name for the print queue.
The printer is now configured with the default settings and ready to use. Click to return to the view. The newly configured printer is now visible in the list of printers.
Not all printer drivers available for openSUSE Leap are installed by default. If no suitable driver is available in the dialog when adding a new printer, install a driver package containing drivers for your printer:
Start the YaST printer module with › .
In the screen, click .
In the Find and Assign a Driver section, click
.
Choose one or more suitable driver packages from the list. Do not specify the path to a printer description file.
Choose and confirm the package installation.
To directly use these drivers, proceed as described in Procedure 7.3, “Adding a New Printer”.
PostScript printers do not need printer driver software. PostScript printers need only a PostScript Printer Description (PPD) file which matches the particular model. PPD files are provided by the printer manufacturer.
If no suitable PPD file is available in the dialog when adding a PostScript printer, install a PPD file for your printer:
Several sources for PPD files are available. It is recommended to first try additional driver packages that are shipped with openSUSE Leap but not installed by default (see below for installation instructions). If these packages do not contain suitable drivers for your printer, get PPD files directly from your printer vendor or from the driver CD of a PostScript printer. For details, see Section 8.8.2, “No Suitable PPD File Available for a PostScript Printer”. Alternatively, find PPD files at http://www.linuxfoundation.org/collaborate/workgroups/openprinting/database/databaseintro, the “OpenPrinting.org printer database”. When downloading PPD files from OpenPrinting, keep in mind that it always shows the latest Linux support status, which is not necessarily met by openSUSE Leap.
Start the YaST printer module with › .
In the screen, click .
In the Find and Assign a Driver section, click
.
Enter the full path to the PPD file into the text box under Make
a Printer Description File Available.
Click to return to the Add New Printer
Configuration screen.
To directly use this PPD file, proceed as described in Procedure 7.3, “Adding a New Printer”.
By editing an existing configuration for a printer you can change basic settings such as connection type and driver. It is also possible to adjust the default settings for paper size, resolution, media source, etc. You can change identifiers of the printer by altering the printer description or location.
Start the YaST printer module with › .
In the screen, choose a local printer configuration from the list and click .
Change the connection type or the driver as described in Procedure 7.3, “Adding a New Printer”. This should only be necessary in case you have problems with the current configuration.
Optionally, make this printer the default by checking .
Adjust the default settings by clicking . To change a setting, expand the list of options by clicking the respective + sign. Change the default by clicking an option. Apply your changes with .
Network printers are not detected automatically. They must be configured manually using the YaST printer module. Depending on your network setup, you can print to a print server (CUPS, LPD, SMB, or IPX) or directly to a network printer (preferably via TCP). Access the configuration view for network printing by choosing from the left pane in the YaST printer module.
In a Linux environment CUPS is usually used to print via the network. The simplest setup is to only print via a single CUPS server which can directly be accessed by all clients. Printing via more than one CUPS server requires a running local CUPS daemon that communicates with the remote CUPS servers.
CUPS servers announce their print queues over the network either via the
traditional CUPS browsing protocol or via Bonjour/DNS-SD. Clients need to
be able to browse these lists, so users can select specific printers to
send their print jobs to. To be able to browse network print queues, the
service cups-browsed provided by
the package
cups-filters-cups-browsed must run on all clients that print via CUPS
servers. cups-browsed is started
automatically when configuring network printing with YaST.
In case browsing does not work after having started
cups-browsed, the CUPS server(s)
probably announce the network print queues via Bonjour/DNS-SD. In this
case you need to additionally install the package
avahi and start the associated
service with sudo systemctl start avahi-daemon on all
clients.
Start the YaST printer module with › .
From the left pane, launch the screen.
Check and specify the name or IP address of the server.
Click to make sure you have chosen the correct name or IP address.
Click OK to return to the screen. All printers available via the CUPS server are now listed.
Start the YaST printer module with › .
From the left pane, launch the screen.
Check .
Under General Settings specify which servers to use.
You may accept connections from all networks available or from specific
hosts. If you choose the latter option, you need to specify the host
names or IP addresses.
Confirm by clicking and then when asked to start a local CUPS server. After the server has started, YaST returns to the screen. Click to see the printers detected by now. Click this button again in case more printers have become available.
If your network offers print services via print servers other than CUPS, start the YaST printer module with › and launch the screen from the left pane. Start the and choose the appropriate . Ask your network administrator for details on configuring a network printer in your environment.
You can configure a USB or SCSI scanner with YaST. The sane-backends package contains hardware drivers and other essentials needed to use a scanner. If you own an HP All-In-One device, see Section 7.4.1, “Configuring an HP All-In-One Device”. Instructions on how to configure a network scanner are available in Section 7.4.3, “Scanning over the Network”.
Connect your USB or SCSI scanner to your computer and turn it on.
Start YaST and select › . YaST builds the scanner database and tries to detect your scanner model automatically.
If a USB or SCSI scanner is not properly detected, try › .
To activate the scanner select it from the list of detected scanners and click .
Choose your model from the list and click and .
Use › to make sure you have chosen the correct driver.
Leave the configuration screen with .
An HP All-In-One device can be configured with YaST even if it is made available via the network. If you own a USB HP All-In-One device, start configuring as described in Procedure 7.9, “Configuring a USB or SCSI Scanner”. If it is detected properly and the succeeds, it is ready to use.
If your USB device is not properly detected, or your HP All-In-One device is connected to the network, run the HP Device Manager:
Start YaST and select › . YaST loads the scanner database.
Start the HP Device Manager with › and follow the on-screen instructions. After having finished the HP Device Manager, the YaST scanner module automatically restarts the auto detection.
Test it by choosing › .
Leave the configuration screen with .
openSUSE Leap allows the sharing of a scanner over the network. To do so, configure your scanner as follows:
Configure the scanner as described in Section 7.4, “Setting Up a Scanner”.
Choose › .
Enter the host names of the clients (separated by a comma) that should be allowed to use the scanner under › and leave the configuration dialog with .
To use a scanner that is shared over the network, proceed as follows:
Start YaST and select › .
Open the network scanner configuration menu by › .
Enter the host name of the machine the scanner is connected to under ›
Leave with . The network scanner is now listed in the Scanner Configuration window and is ready to use.
openSUSE® Leap supports printing with many types of printers, including remote network printers. Printers can be configured manually or with YaST. For configuration instructions, refer to Section 7.3, “Setting Up a Printer”. Both graphical and command line utilities are available for starting and managing print jobs. If your printer does not work as expected, refer to Section 8.8, “Troubleshooting”.
CUPS (Common Unix Printing System) is the standard print system in openSUSE Leap.
Printers can be distinguished by interface, such as USB or network, and printer language. When buying a printer, make sure that the printer has an interface that is supported (USB, Ethernet, or Wi-Fi) and a suitable printer language. Printers can be categorized on the basis of the following three classes of printer languages:
PostScript is the printer language in which most print jobs in Linux and Unix are generated and processed by the internal print system. If PostScript documents can be processed directly by the printer and do not need to be converted in additional stages in the print system, the number of potential error sources is reduced.
Currently PostScript is being replaced by PDF as the standard print job format. PostScript+PDF printers that can directly print PDF (in addition to PostScript) already exist. For traditional PostScript printers PDF needs to be converted to PostScript in the printing workflow.
In the case of known printer languages, the print system can convert PostScript jobs to the respective printer language with Ghostscript. This processing stage is called interpreting. The best-known languages are PCL (which is mostly used by HP printers and their clones) and ESC/P (which is used by Epson printers). These printer languages are usually supported by Linux and produce an adequate print result. Linux may not be able to address some special printer functions. Except for HP and Epson, there are currently no printer manufacturers who develop Linux drivers and make them available to Linux distributors under an open source license.
These printers do not support any of the common printer languages. They use their own undocumented printer languages, which are subject to change when a new edition of a model is released. Usually only Windows drivers are available for these printers. See Section 8.8.1, “Printers without Standard Printer Language Support” for more information.
Before you buy a new printer, refer to the following sources to check how well the printer you intend to buy is supported:
The OpenPrinting home page with the printer database. The database shows the latest Linux support status. However, a Linux distribution can only integrate the drivers available at production time. Accordingly, a printer currently rated as “perfectly supported” may not have had this status when the latest openSUSE Leap version was released. Thus, the databases may not necessarily indicate the correct status, but only provide an approximation.
The Ghostscript Web page.
/usr/share/doc/packages/ghostscript/catalog.devices
List of built-in Ghostscript drivers.
The user creates a print job. The print job consists of the data to print plus information for the spooler. This includes the name of the printer or the name of the print queue, and optionally, information for the filter, such as printer-specific options.
At least one dedicated print queue exists for every printer. The spooler holds the print job in the queue until the desired printer is ready to receive data. When the printer is ready, the spooler sends the data through the filter and back-end to the printer.
The filter converts the data generated by the application that is printing (usually PostScript or PDF, but also ASCII, JPEG, etc.) into printer-specific data (PostScript, PCL, ESC/P, etc.). The features of the printer are described in the PPD files. A PPD file contains printer-specific options with the parameters needed to enable them on the printer. The filter system makes sure that options selected by the user are enabled.
If you use a PostScript printer, the filter system converts the data into printer-specific PostScript. This does not require a printer driver. If you use a non-PostScript printer, the filter system converts the data into printer-specific data. This requires a printer driver suitable for your printer. The back-end receives the printer-specific data from the filter then passes it to the printer.
There are various possibilities for connecting a printer to the system. The configuration of CUPS does not distinguish between a local printer and a printer connected to the system over the network. For more information about the printer connection, read the article CUPS in a Nutshell at http://en.opensuse.org/SDB:CUPS_in_a_Nutshell.
When connecting the printer to the machine, do not forget that only USB devices can be plugged in or unplugged during operation. To avoid damaging your system or printer, shut down the system before changing any connections that are not USB.
PPD (PostScript printer description) is the computer language that describes the properties, like resolution, and options, such as the availability of a duplex unit. These descriptions are required for using various printer options in CUPS. Without a PPD file, the print data would be forwarded to the printer in a “raw” state, which is usually not desired.
To configure a PostScript printer, the best approach is to get a suitable
PPD file. Many PPD files are available in the packages
manufacturer-PPDs and
OpenPrintingPPDs-postscript. See
Section 8.7.3, “PPD Files in Various Packages” and
Section 8.8.2, “No Suitable PPD File Available for a PostScript Printer”.
New PPD files can be stored in the directory
/usr/share/cups/model/ or added to the print system
with YaST as described in Section 7.3.1.1, “Adding Drivers with YaST”.
Subsequently, the PPD file can be selected during the printer setup.
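As a sketch, a downloaded PPD file can also be added from the command line instead of via YaST ("myprinter.ppd" is a placeholder name):

```shell
sudo cp myprinter.ppd /usr/share/cups/model/
sudo systemctl restart cups   # make CUPS re-read the model directory
```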
Be careful if a printer manufacturer wants you to install entire software packages. This kind of installation may result in the loss of the support provided by openSUSE Leap. Also, print commands may work differently and the system may no longer be able to address devices of other manufacturers. For this reason, the installation of manufacturer software is not recommended.
A network printer can support various protocols, some even concurrently. Although most of the supported protocols are standardized, some manufacturers modify the standard. Manufacturers then provide drivers for only a few operating systems. Unfortunately, Linux drivers are rarely provided. The current situation is such that you cannot act on the assumption that every protocol works smoothly in Linux. Therefore, you may need to experiment with various options to achieve a functional configuration.
CUPS supports the socket,
LPD, IPP and
smb protocols.
Socket refers to a connection in which the plain
print data is sent directly to a TCP socket. Some socket port numbers
that are commonly used are 9100 or 35.
The device URI (uniform resource identifier) syntax is:
socket://IP.OF.THE.PRINTER:PORT,
for example: socket://192.168.2.202:9100/.
The LPD protocol is described in RFC 1179. Under this protocol, some
job-related data, such as the ID of the print queue, is sent before the
actual print data is sent. Therefore, a print queue must be specified
when configuring the LPD protocol. The implementations of diverse printer
manufacturers are flexible enough to accept any name as the print queue.
If necessary, the printer manual should indicate what name to use. LPT,
LPT1, LP1 or similar names are often used. The port number for an LPD
service is 515. An example device URI is
lpd://192.168.2.202/LPT1.
IPP is a relatively new protocol (1999) based on the HTTP protocol. With
IPP, more job-related data is transmitted than with the other protocols.
CUPS uses IPP for internal data transmission. The name of the print queue
is necessary to configure IPP correctly. The port number for IPP is
631. Example device URIs are
ipp://192.168.2.202/ps and
ipp://192.168.2.202/printers/ps.
CUPS also supports printing on printers connected to Windows shares. The
protocol used for this purpose is SMB. SMB uses the port numbers
137, 138 and 139.
Example device URIs are
smb://user:password@workgroup/smb.example.com/printer,
smb://user:password@smb.example.com/printer, and
smb://smb.example.com/printer.
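As an illustration of the URI syntax, such a device URI can be assembled from its components in the shell (all values here are placeholders; substitute your own):

```shell
# Placeholder credentials and share name
SMB_USER=tux
SMB_PASSWORD=secret
SERVER=smb.example.com
SHARE=printer
# Assemble the device URI in the form smb://user:password@server/share
DEVICE_URI="smb://${SMB_USER}:${SMB_PASSWORD}@${SERVER}/${SHARE}"
echo "$DEVICE_URI"   # smb://tux:secret@smb.example.com/printer
```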
The protocol supported by the printer must be determined before
configuration. If the manufacturer does not provide the needed information,
the command nmap (which comes with the
nmap package) can be used to ascertain the
protocol. nmap checks a host for open ports. For example:
tux > nmap -p 35,137-139,515,631,9100-10000 IP.OF.THE.PRINTER
CUPS can be configured with command line tools like
lpinfo, lpadmin and
lpoptions. You need a device URI consisting of a
back-end, such as USB, and parameters. To determine valid device URIs on
your system use the command lpinfo -v | grep ":/":
tux > sudo lpinfo -v | grep ":/"
direct usb://ACME/FunPrinter%20XL
network socket://192.168.2.253
With lpadmin the CUPS server administrator can add,
remove or manage print queues. To add a print queue, use the following
syntax:
tux > sudo lpadmin -p QUEUE -v DEVICE-URI -P PPD-FILE -E
Then the device (-v) is available as
QUEUE (-p), using the specified
PPD file (-P). This means that you must know the PPD file
and the device URI to configure the printer manually.
Do not use -E as the first option. For all CUPS commands,
-E as the first argument sets use of an encrypted
connection. To enable the printer, -E must be used as shown
in the following example:
tux > sudo lpadmin -p ps -v usb://ACME/FunPrinter%20XL -P \
/usr/share/cups/model/Postscript.ppd.gz -E
The following example configures a network printer:
tux > sudo lpadmin -p ps -v socket://192.168.2.202:9100/ -P \
/usr/share/cups/model/Postscript-level1.ppd.gz -E
For more options of lpadmin, see the man page of
lpadmin(8).
During printer setup, certain options are set as default. These options can be modified for every print job (depending on the print tool used). Changing these default options with YaST is also possible. Using command line tools, set default options as follows:
First, list all options:
tux > sudo lpoptions -p QUEUE -l
Example:
Resolution/Output Resolution: 150dpi *300dpi 600dpi
The activated default option is identified by a preceding asterisk
(*).
Change the option with lpadmin:
tux > sudo lpadmin -p QUEUE -o Resolution=600dpi
Check the new setting:
tux > sudo lpoptions -p QUEUE -l
Resolution/Output Resolution: 150dpi 300dpi *600dpi
When a normal user runs lpoptions, the settings are
written to ~/.cups/lpoptions. However,
root settings are written to
/etc/cups/lpoptions.
To print from the command line, enter lp -d
QUEUENAME FILENAME,
substituting the corresponding names for
QUEUENAME and
FILENAME.
Some applications rely on the lp command for printing. In
this case, enter the correct command in the application's print dialog,
usually without specifying FILENAME, for example,
lp -d QUEUENAME.
Several CUPS features have been adapted for openSUSE Leap. Some of the most important changes are covered here.
After having performed a default installation of openSUSE Leap, firewalld is active and the network interfaces are configured to be in the public zone, which blocks incoming traffic. More information about the firewalld configuration is available in Section 15.4, “firewalld” and at http://en.opensuse.org/SDB:CUPS_and_SANE_Firewall_settings.
Normally, a CUPS client runs on a regular workstation located in a trusted
network environment behind a firewall. In this case it is recommended to
configure the network interface to be in the Internal
Zone, so the workstation is reachable from within the network.
If the CUPS server is part of a trusted network environment protected by a
firewall, the network interface should be configured to be in the
Internal Zone of the firewall. It is not recommended to
set up a CUPS server in an untrusted network environment unless you ensure
that it is protected by special firewall rules and secure settings in
the CUPS configuration.
CUPS servers regularly announce the availability and status information of shared printers over the network. Clients can access this information to display a list of available printers in printing dialogs, for example. This is called “browsing”.
CUPS servers announce their print queues over the network either via the
traditional CUPS browsing protocol or via Bonjour/DNS-SD. To be able to
browse network print queues, the service
cups-browsed needs to run on all
clients that print via CUPS servers.
cups-browsed is not started by
default. To start it for the active session, use sudo systemctl
start cups-browsed. To ensure it is automatically started after
booting, enable it with sudo systemctl enable
cups-browsed on all clients.
In case browsing does not work after having started
cups-browsed, the CUPS server(s)
probably announce the network print queues via Bonjour/DNS-SD. In this case
you need to additionally install the package
avahi and start the associated
service with sudo systemctl start avahi-daemon on all
clients.
The YaST printer configuration sets up the queues for CUPS using the PPD
files installed in /usr/share/cups/model. To find the
suitable PPD files for the printer model, YaST compares the vendor and
model determined during hardware detection with the vendors and models in
all PPD files. For this purpose, the YaST printer configuration generates
a database from the vendor and model information extracted from the PPD
files.
The configuration using only PPD files and no other information sources has
the advantage that the PPD files in
/usr/share/cups/model can be modified freely. For
example, if you have PostScript printers the PPD files can be copied
directly to /usr/share/cups/model (if they do not
already exist in the manufacturer-PPDs or
OpenPrintingPPDs-postscript packages) to achieve
an optimum configuration for your printers.
Additional PPD files are provided by the following packages:
gutenprint: the Gutenprint driver and its matching PPDs
splix: the SpliX driver and its matching PPDs
OpenPrintingPPDs-ghostscript: PPDs for Ghostscript built-in drivers
OpenPrintingPPDs-hpijs: PPDs for the HPIJS driver for non-HP printers
The following sections cover some of the most frequently encountered printer hardware and software problems and ways to solve or circumvent these problems. Among the topics covered are GDI printers, PPD files and port configuration. Common network printer problems, defective printouts, and queue handling are also addressed.
These printers do not support any common printer language and can only be addressed with special proprietary control sequences. Therefore they can only work with the operating system versions for which the manufacturer delivers a driver. GDI is a programming interface developed by Microsoft* for graphics devices. Usually the manufacturer delivers drivers only for Windows, and since the Windows driver uses the GDI interface these printers are also called GDI printers. The actual problem is not the programming interface, but that these printers can only be addressed with the proprietary printer language of the respective printer model.
Some GDI printers can be switched to operate either in GDI mode or in one of the standard printer languages. See the printer manual to find out whether this is possible. Some models require special Windows software to perform the switch (note that the Windows printer driver may switch the printer back into GDI mode whenever you print from Windows). For other GDI printers, extension modules for a standard printer language are available.
Some manufacturers provide proprietary drivers for their printers. The disadvantage of proprietary printer drivers is that there is no guarantee that these work with the installed print system or that they are suitable for the various hardware platforms. In contrast, printers that support a standard printer language do not depend on a special print system version or a special hardware platform.
Instead of spending time trying to make a proprietary Linux driver work, it may be more cost-effective to purchase a printer which supports a standard printer language (preferably PostScript). This would solve the driver problem once and for all, eliminating the need to install and configure special driver software and obtain driver updates that may be required because of new developments in the print system.
If the manufacturer-PPDs or
OpenPrintingPPDs-postscript packages do not
contain a suitable PPD file for a PostScript printer, it should be possible
to use the PPD file from the driver CD of the printer manufacturer or
download a suitable PPD file from the Web page of the printer manufacturer.
If the PPD file is provided as a zip archive (.zip) or a self-extracting
zip archive (.exe), unpack it with
unzip. First, review the license terms of the PPD file.
Then use the cupstestppd utility to check if the PPD
file complies with “Adobe PostScript Printer Description File Format
Specification, version 4.3.” If the utility returns
“FAIL,” the errors in the PPD files are serious and are likely
to cause major problems. The problem spots reported by
cupstestppd should be eliminated. If necessary, ask the
printer manufacturer for a suitable PPD file.
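The steps above can be sketched on the command line; the archive and PPD file names are placeholders:

```shell
unzip driver.zip          # unpack the vendor-supplied archive
cupstestppd printer.ppd   # check PPD conformance; prints PASS or FAIL plus diagnostics
```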
Connect the printer directly to the computer. For test purposes, configure the printer as a local printer. If this works, the problems are related to the network.
The TCP/IP network and name resolution must be functional.
lpd
Use the following command to test if a TCP connection can be established
to lpd (port 515) on
HOST:
tux > netcat -z HOST 515 && echo ok || echo failed
If the connection to lpd cannot be established,
lpd may not be active or there may be basic network
problems.
Provided that the respective
lpd is active and the host accepts queries, run the following command as root to query a status report for
QUEUE on remote
HOST:
root # echo -e "\004queue" \
| netcat -w 2 -p 722 HOST 515
If lpd does not respond, it may not be active or
there may be basic network problems. If lpd responds,
the response should show why printing is not possible on the
queue on host. If you receive a
response like that shown in Example 8.1, “Error Message from lpd”, the problem is
caused by the remote lpd.
lpd: your host does not have line printer access
lpd: queue does not exist
printer: spooling disabled
printer: printing disabled
cupsd
A CUPS network server can broadcast its queues by default every 30
seconds on UDP port 631. Accordingly, the following
command can be used to test whether there is a broadcasting CUPS network
server in the network. Make sure to stop your local CUPS daemon before
executing the command.
tux > netcat -u -l -p 631 & PID=$! ; sleep 40 ; kill $PID

If a broadcasting CUPS network server exists, the output appears as shown in Example 8.2, “Broadcast from the CUPS Network Server”.
ipp://192.168.2.202:631/printers/queue
The following command can be used to test if a TCP connection can be
established to cupsd (port 631) on
HOST:
tux > netcat -z HOST 631 && echo ok || echo failed
If the connection to cupsd cannot be established,
cupsd may not be active or there may be basic network
problems. lpstat -h HOST
-l -t returns a (possibly very long) status report for all queues on
HOST, provided the respective
cupsd is active and the host accepts queries.
The next command can be used to test if the QUEUE on HOST accepts a print job consisting of a single carriage-return character. Nothing should be printed. Possibly, a blank page may be ejected.
tux > echo -en "\r" \
| lp -d queue -h HOST
Spoolers running on a print server machine sometimes cause problems when they need to deal with multiple print jobs. Since this is caused by the spooler on the print server machine, there is no direct way to resolve the issue. As a work-around, circumvent the spooler by addressing the printer connected to the print server machine directly via TCP socket. See Section 8.4, “Network Printers”.
In this way, the print server machine is reduced to a converter between the
various forms of data transfer (TCP/IP network and local printer
connection). To use this method, you need to know the TCP port on the
print server machine. If the printer is connected to the print server machine
and turned on, this TCP port can usually be determined with the
nmap utility from the nmap
package some time after the print server machine is powered up. For example,
nmap IP-address may
deliver the following output for a print server machine:
Port     State Service
23/tcp   open  telnet
80/tcp   open  http
515/tcp  open  printer
631/tcp  open  cups
9100/tcp open  jetdirect
This output indicates that the printer connected to the print server machine
can be addressed via TCP socket on port 9100. By
default, nmap only checks several commonly known
ports listed in /usr/share/nmap/nmap-services. To
check all possible ports, use the command nmap
-p
FROM_PORT-TO_PORT IP_ADDRESS.
This may take some time. For further information, refer to the man page
of nmap.
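As a sketch, a full-range scan of the print server from the example output above could be invoked as follows (192.168.2.202 is a hypothetical address; the scan itself is commented out because it requires the nmap package and can take a long time):

```shell
# Build the full-range nmap invocation described in the text.
from=1; to=65535
cmd="nmap -p ${from}-${to} 192.168.2.202"
echo "$cmd"
# → nmap -p 1-65535 192.168.2.202
# $cmd   # uncomment to run; requires the nmap package
```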
Enter a command like
tux > echo -en "\rHello\r\f" | netcat -w 1 IP-address port
tux > cat file | netcat -w 1 IP-address port
to send character strings or files directly to the respective port to test if the printer can be addressed on this port.
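As a sketch, the test string from the first command can be composed into a file and inspected before sending: \r returns the carriage and \f ejects the page on a plain-text printer ("Hello" is arbitrary test text; the netcat line is commented out because it needs a reachable printer):

```shell
# Compose a minimal raw-port test job: CR, test text, CR + form feed.
printf '\rHello\r\f' > /tmp/testjob
od -An -c /tmp/testjob            # show the exact bytes sent
# netcat -w 1 192.168.2.202 9100 < /tmp/testjob  # hypothetical printer
```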
For the print system, the print job is completed when the CUPS back-end completes the data transfer to the recipient (printer). If further processing on the recipient fails (for example, if the printer is not able to print the printer-specific data) the print system does not notice this. If the printer cannot print the printer-specific data, select a PPD file that is more suitable for the printer.
If the data transfer to the recipient fails entirely after several
attempts, the CUPS back-end, such as USB or
socket, reports an error to the print system (to
cupsd). The back-end determines how many unsuccessful
attempts are appropriate until the data transfer is reported as impossible.
As further attempts would be in vain, cupsd disables
printing for the respective queue. After eliminating the cause of the
problem, the system administrator must re-enable printing with the command
cupsenable.
If a CUPS network server broadcasts its queues to the client hosts via
browsing and a suitable local cupsd is active on the
client hosts, the client cupsd accepts print jobs from
applications and forwards them to the cupsd on the
server. When cupsd on the server accepts a print job, it
is assigned a new job number. Therefore, the job number on the client host
is different from the job number on the server. As a print job is usually
forwarded immediately, it cannot be deleted with the job number on the
client host. This is because the client cupsd regards the
print job as completed when it has been forwarded to the server
cupsd.
To delete the print job on the server, use a
command such as lpstat -h cups.example.com -o to determine the
job number on the server. This assumes that the server has not already
completed the print job (that is, sent it completely to the printer). Use
the obtained job
number to delete the print job on the server as follows:
tux > cancel -h cups.example.com QUEUE-JOBNUMBER
If you switch the printer off or shut down the computer during the printing
process, print jobs remain in the queue. Printing resumes when the computer
(or the printer) is switched back on. Defective print jobs must be removed
from the queue with cancel.
If a print job is corrupted or an error occurs in the communication between the host and the printer, the printer cannot process the data correctly and prints numerous sheets of paper with unintelligible characters. To fix the problem, follow these steps:
To stop printing, remove all paper from ink jet printers or open the paper trays of laser printers. High-quality printers have a button for canceling the current printout.
The print job may still be in the queue, because jobs are only removed
after they are sent completely to the printer. Use lpstat
-o or lpstat -h cups.example.com -o to check which
queue is currently printing. Delete the print job with
cancel
QUEUE-JOBNUMBER or
cancel -h cups.example.com
QUEUE-JOBNUMBER.
Some data may still be transferred to the printer even though the print job has been deleted from the queue. Check if a CUPS back-end process is still running for the respective queue and terminate it.
Reset the printer completely by switching it off for some time. Then insert the paper and turn on the printer.
Use the following generic procedure to locate problems in CUPS:
Set LogLevel debug in
/etc/cups/cupsd.conf.
Stop cupsd.
Remove /var/log/cups/error_log* to avoid having to
search through very large log files.
Start cupsd.
Repeat the action that led to the problem.
Check the messages in /var/log/cups/error_log* to
identify the cause of the problem.
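The procedure above can be sketched as a shell session. It is demonstrated here on a scratch copy of the configuration file so that it runs without a live CUPS instance; the service and log commands, which require root, are commented out:

```shell
# Scratch stand-in for /etc/cups/cupsd.conf so the sketch runs anywhere.
conf=$(mktemp)
printf 'LogLevel warn\nMaxLogSize 1m\n' > "$conf"

# Step 1: raise the log level to debug.
sed -i 's/^LogLevel .*/LogLevel debug/' "$conf"
grep '^LogLevel' "$conf"          # → LogLevel debug

# Steps 2-4 on the real system (as root):
# systemctl stop cups; rm /var/log/cups/error_log*; systemctl start cups
# Then repeat the failing action and read /var/log/cups/error_log.
```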
In-depth information about printing on SUSE Linux is presented in the openSUSE Support Database at http://en.opensuse.org/Portal:Printing.
The X Window System (X11) is the de facto standard for graphical user interfaces in Unix. X is network-based, enabling applications started on one host to be displayed on another host connected over any kind of network (LAN or Internet). This chapter provides basic information on the X configuration, and background information about the use of fonts in openSUSE® Leap.
Usually, the X Window System needs no configuration. The hardware is
dynamically detected during X start-up. The use of
xorg.conf is therefore deprecated. If you still need to
specify custom options to change the way X behaves, you can still do so by
modifying configuration files under
/etc/X11/xorg.conf.d/.
Fonts in Linux can be categorized into two types:
Outline (vector) fonts
A vector font contains a mathematical description, in the form of drawing instructions, of the shape of each glyph. As such, each glyph can be scaled to arbitrary sizes without loss of quality. Before such a font (or glyph) can be used, the mathematical descriptions need to be transformed into a raster (grid). This process is called font rasterization. Font hinting (embedded inside the font) improves and optimizes the rendering result for a particular size. Rasterization and hinting are done with the FreeType library.
Common formats under Linux are PostScript Type 1 and Type 2, TrueType, and OpenType.
Bitmap fonts
A bitmap font consists of an array of pixels designed for a specific font size. Bitmap fonts are extremely fast and simple to render. However, compared to vector fonts, bitmap fonts cannot be scaled without losing quality. As such, these fonts are usually distributed in different sizes. These days, bitmap fonts are still used in the Linux console and sometimes in terminals.
Under Linux, Portable Compiled Format (PCF) or Glyph Bitmap Distribution Format (BDF) are the most common formats.
The appearance of these fonts can be influenced by two main aspects:
choosing a suitable font family,
rendering the font with an algorithm that achieves results comfortable for the receiver's eyes.
The last point is only relevant to vector fonts. Although the above two points are highly subjective, some defaults need to be created.
Linux font rendering systems consist of several libraries with different relations. The basic font rendering library is FreeType, which converts font glyphs of supported formats into optimized bitmap glyphs. The rendering process is controlled by an algorithm and its parameters (which may be subject to patent issues).
Every program or library that uses FreeType should consult the Fontconfig library. This library gathers font configuration from users and from the system. When a user amends their Fontconfig settings, the change is picked up by all Fontconfig-aware applications.
More sophisticated OpenType shaping, needed for scripts such as Arabic, Han or Phags-Pa, and other higher-level text processing is done using HarfBuzz or Pango.
To get an overview of which fonts are installed on your system, use the commands rpm or fc-list. Both give a good answer, but may return different lists depending on system and user configuration:
rpm
Invoke rpm to see which software packages containing
fonts are installed on your system:
tux > rpm -qa '*fonts*'
Every font package should satisfy this expression. However, the command
may return some false positives like fonts-config
(which is neither a font nor does it contain fonts).
fc-list
Invoke fc-list to get an overview about what font
families can be accessed, whether they are installed on the system or in
your home:
tux > fc-list ':' family
The command fc-list is a wrapper to the Fontconfig
library. It is possible to query a lot of interesting information from
Fontconfig—or, to be more precise, from its cache. See
man 1 fc-list for more details.
If you want to know what an installed font family looks like, either use the
command ftview (package
ft2demos) or visit
http://fontinfo.opensuse.org/. For example, to display
the FreeMono font in 14 point, use ftview like this:
tux > ftview 14 /usr/share/fonts/truetype/FreeMono.ttf
If you need further information, go to http://fontinfo.opensuse.org/ to find out which styles (regular, bold, italic, etc.) and languages are supported.
To query which font is used when a pattern is given, use the
fc-match command.
For example, if your pattern contains an already installed font,
fc-match returns the file name, font family, and the
style:
tux > fc-match 'Liberation Serif'
LiberationSerif-Regular.ttf: "Liberation Serif" "Regular"
If the desired font does not exist on your system, Fontconfig's matching rules take effect and try to find the most similar fonts available. This means your request is substituted:
tux > fc-match 'Foo Family'
DejaVuSans.ttf: "DejaVu Sans" "Book"
Fontconfig supports aliases: a name is substituted with another family name. Typical examples are the generic names such as “sans-serif”, “serif”, and “monospace”. These alias names can be substituted by real family names or even a preference list of family names:
tux > for font in serif sans mono; do fc-match "$font" ; done
DejaVuSerif.ttf: "DejaVu Serif" "Book"
DejaVuSans.ttf: "DejaVu Sans" "Book"
DejaVuSansMono.ttf: "DejaVu Sans Mono" "Book"
The result may vary on your system, depending on which fonts are currently installed.
Fontconfig always returns a real family (if at least one is installed) according to the given request, as similar as possible. “Similarity” depends on Fontconfig's internal metrics and on the user's or administrator's Fontconfig settings.
To install a new font, use one of the following methods:
Manually install the font files such as *.ttf or
*.otf to a known font directory. If it needs to be
system-wide, use the standard directory
/usr/share/fonts. For installation in your home
directory, use ~/.config/fonts.
If you want to deviate from the standard directories, Fontconfig allows
you to choose another one. Let Fontconfig know by using the
<dir> element, see
Section 9.1.5.2, “Diving into Fontconfig XML” for details.
Install fonts using zypper. Lots of fonts are already
available as a package, be it on your SUSE distribution or in the
M17N:fonts
repository. Add the repository to your list using the following command.
For example, to add a repository for openSUSE Leap 42.3:
tux > sudo zypper ar http://download.opensuse.org/repositories/M17N:/fonts/openSUSE_Leap_42.3/
To search for your FONT_FAMILY_NAME use this command:
tux > zypper se 'FONT_FAMILY_NAME*fonts'
Depending on the rendering medium and font size, the result may be unsatisfactory. For example, an average monitor these days has a resolution of 100 dpi, which makes pixels too big and glyphs look clunky.
There are several algorithms available to deal with low resolutions, such as anti-aliasing (grayscale smoothing), hinting (fitting to the grid), or subpixel rendering (tripling resolution in one direction). These algorithms can also differ from one font format to another.
Subpixel rendering is not used in SUSE distributions. Although FreeType2 has support for this algorithm, it is covered by several patents expiring at the end of the year 2019. Therefore, setting subpixel rendering options in Fontconfig has no effect unless the system has a FreeType2 library with subpixel rendering compiled in.
Via Fontconfig, it is possible to select a rendering algorithm for every font individually or for a set of fonts.
sysconfig
openSUSE Leap comes with a sysconfig layer above
Fontconfig. This is a good starting point for experimenting with font
configuration. To change the default settings, edit the configuration file
/etc/sysconfig/fonts-config (or use the YaST
sysconfig module). After you have edited the file, run
fonts-config:
tux > sudo /usr/sbin/fonts-config
Restart the application to make the effect visible. Keep in mind the following issues:
A few applications do not need to be restarted. For example, Firefox re-reads the Fontconfig configuration from time to time. Newly created or reloaded tabs get the new font configuration later.
The fonts-config script is called automatically after
every package installation or removal (if not, it is a bug of the font
software package).
Every sysconfig variable can be temporarily overridden with a
fonts-config command line option. See
fonts-config --help for details.
There are several sysconfig variables which can be altered. See
man 1 fonts-config or the help page of the YaST
sysconfig module. The following variables are examples:
Consider FORCE_HINTSTYLE,
FORCE_AUTOHINT, FORCE_BW,
FORCE_BW_MONOSPACE,
USE_EMBEDDED_BITMAPS and
EMBEDDED_BITMAP_LANGAGES
Use PREFER_SANS_FAMILIES,
PREFER_SERIF_FAMILIES,
PREFER_MONO_FAMILIES and
SEARCH_METRIC_COMPATIBLE
The following list provides some configuration examples, sorted from the “most readable” fonts (more contrast) to “most beautiful” (more smoothed).
Prefer bitmap fonts via the PREFER_*_FAMILIES
variables. Follow the example in the help section for these variables.
Be aware that these fonts are rendered black and white, not smoothed and
that bitmap fonts are available in several sizes only. Consider using
SEARCH_METRIC_COMPATIBLE="no"
to disable metric compatibility-driven family name substitutions.
Scalable fonts rendered without antialiasing can result in a similar outcome to bitmap fonts, while maintaining font scalability. Use well hinted fonts like the Liberation families. Unfortunately, there is a lack of well hinted fonts. Set the following variable to force this method:
FORCE_BW="yes"
Render monospaced fonts without antialiasing only, otherwise use default settings:
FORCE_BW_MONOSPACE="yes"
All fonts are rendered with antialiasing. Well hinted fonts will be
rendered with the byte code interpreter (BCI) and
the rest with autohinter (hintstyle=hintslight).
Leave all relevant sysconfig variables to the default setting.
Use fonts in CFF format. They can also be considered more readable than the default TrueType fonts, given the current improvements in FreeType2. Try them out by following the example of PREFER_*_FAMILIES. Possibly make them darker and bolder with:
SEARCH_METRIC_COMPATIBLE="no"
as they are rendered by hintstyle=hintslight by
default. Also consider using:
SEARCH_METRIC_COMPATIBLE="no"
Even for a well hinted font, use FreeType2's autohinter. That can lead to thicker, sometimes fuzzier letter shapes with lower contrast. Set the following variable to activate this:
FORCE_AUTOHINTER="yes"
Use FORCE_HINTSTYLE to control the level of hinting.
Fontconfig's configuration format is the eXtensible Markup
Language (XML). These few examples are not a complete reference,
but a brief overview. Details and other inspiration can be found in
man 5 fonts-conf or in
/etc/fonts/conf.d/.
The central Fontconfig configuration file is
/etc/fonts/fonts.conf, which, among other things, includes the whole /etc/fonts/conf.d/
directory. To customize Fontconfig, there are two places where you can
insert your changes:
System-wide changes.
Edit the file /etc/fonts/local.conf (by default, it
contains an empty fontconfig element).
User-specific changes.
Edit the file ~/.config/fontconfig/fonts.conf.
Place Fontconfig configuration files in the
~/.config/fontconfig/conf.d/ directory.
User-specific changes overwrite any system-wide settings.
The file ~/.fonts.conf is marked as deprecated and
should not be used anymore. Use
~/.config/fontconfig/fonts.conf instead.
Every configuration file needs to have a fontconfig
element. As such, the minimal file looks like this:
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Insert your changes here -->
</fontconfig>
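As a sketch, the minimal skeleton above can be installed as the per-user configuration, using the paths given earlier in this section:

```shell
# Create the user-specific Fontconfig file with the minimal skeleton.
mkdir -p ~/.config/fontconfig
cat > ~/.config/fontconfig/fonts.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
<!-- Insert your changes here -->
</fontconfig>
EOF
```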
If the default directories are not enough, insert the
dir element with the respective directory:
<dir>/usr/share/fonts2</dir>
Fontconfig searches recursively for fonts.
Font-rendering algorithms can be chosen with the following Fontconfig snippet (see Example 9.1, “Specifying Rendering Algorithms”):
<match target="font">
 <test name="family">
  <string>FAMILY_NAME</string>
 </test>
 <edit name="antialias" mode="assign">
  <bool>true</bool>
 </edit>
 <edit name="hinting" mode="assign">
  <bool>true</bool>
 </edit>
 <edit name="autohint" mode="assign">
  <bool>false</bool>
 </edit>
 <edit name="hintstyle" mode="assign">
  <const>hintfull</const>
 </edit>
</match>
Various properties of fonts can be tested. For example, the
<test> element can test for the font family (as
shown in the example), size interval, spacing, font format, and others.
When omitting <test> completely, all
<edit> elements are applied to every font
(global change).
Rule 1
<alias>
 <family>Alegreya SC</family>
 <default>
  <family>serif</family>
 </default>
</alias>
Rule 2
<alias>
 <family>serif</family>
 <prefer>
  <family>Droid Serif</family>
 </prefer>
</alias>
Rule 3
<alias>
 <family>serif</family>
 <accept>
  <family>STIXGeneral</family>
 </accept>
</alias>
The rules from Example 9.2, “Aliases and Family Name Substitutions” create a prioritized family list (PFL). Depending on the element, different actions are performed:
<default> from
Rule 1
This rule adds a serif family name at the
end of the PFL.
<prefer> from
Rule 2
This rule adds “Droid Serif” just
before the first occurrence of serif in
the PFL, whenever Alegreya SC is in the PFL.
<accept> from Rule 3
This rule adds a “STIXGeneral” family name just
after the first occurrence of the serif
family name in the PFL.
Putting this together, when snippets occur in the order Rule 1 - Rule 2 - Rule 3 and the user requests “Alegreya SC”, then the PFL is created as depicted in Table 9.1, “Generating PFL from Fontconfig rules”.
In Fontconfig's metrics, the family name has the highest priority over other patterns, like style, size, etc. Fontconfig checks which family is currently installed on the system. If “Alegreya SC” is installed, then Fontconfig returns it. If not, it asks for “Droid Serif”, etc.
Be careful. When the order of Fontconfig snippets is changed, Fontconfig can return different results, as depicted in Table 9.2, “Results from Generating PFL from Fontconfig Rules with Changed Order”.
Think of the <default> alias as a classification
or inclusion of this group (if not installed). As the example shows,
<default> should always precede the
<prefer> and <accept>
aliases of that group.
<default> classification is not limited to the
generic aliases serif, sans-serif and monospace. See
/usr/share/fontconfig/conf.avail/30-metric-aliases.conf
for a complex example.
The following Fontconfig snippet in
Example 9.3, “Aliases and Family Name Substitutions” creates a
serif group. Every family in this group can substitute
for the others when a given font is not installed.
<alias>
 <family>Alegreya SC</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>Droid Serif</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>STIXGeneral</family>
 <default>
  <family>serif</family>
 </default>
</alias>
<alias>
 <family>serif</family>
 <accept>
  <family>Droid Serif</family>
  <family>STIXGeneral</family>
  <family>Alegreya SC</family>
 </accept>
</alias>
Priority is given by the order in the <accept>
alias. Similarly, stronger <prefer> aliases can be
used.
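For instance, a preference list like this could be installed system-wide via /etc/fonts/local.conf. The sketch below writes it to a scratch path so it can be tried safely; the family names are the examples used above:

```shell
# Write a serif preference list in Fontconfig's XML format.
cat > /tmp/local.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <alias>
    <family>serif</family>
    <prefer>
      <family>Droid Serif</family>
      <family>STIXGeneral</family>
    </prefer>
  </alias>
</fontconfig>
EOF
grep -c '<family>' /tmp/local.conf   # → 3
# On a real system: sudo cp /tmp/local.conf /etc/fonts/local.conf
```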
Example 9.2, “Aliases and Family Name Substitutions” is expanded by Example 9.4, “Aliases and Family Names Substitutions”.
<alias>
 <family>serif</family>
 <accept>
  <family>Liberation Serif</family>
 </accept>
</alias>
<alias>
 <family>serif</family>
 <prefer>
  <family>DejaVu Serif</family>
 </prefer>
</alias>
The expanded configuration from Example 9.4, “Aliases and Family Names Substitutions” would lead to the following PFL evolution:
[Table: PFL evolution; columns “Order” and “Current PFL”, with one row for the initial request and one per applied rule. Cell contents not preserved.]
In case multiple <accept> declarations for the
same generic name exist, the declaration that is parsed last
“wins”. If possible, do not use
<accept> after
user (/etc/fonts/conf.d/*-user.conf) when creating
a system-wide configuration.
In case multiple <prefer> declarations for the same
generic name exist, the declaration that is parsed last
“wins”. If possible, do not use
<prefer>
before user in the system-wide
configuration.
Every <prefer> declaration overwrites
<accept> declarations for the same generic
name. If the administrator wants to allow the user to use not only
<prefer> but also <accept>,
the administrator should not use
<prefer> in the system-wide configuration. On
the other hand, as users mostly use <prefer>,
this should not have any detrimental effect. The use of
<prefer> is also seen in system-wide configurations.
Install the package xorg-docs to
get more in-depth information about X11. man 5 xorg.conf
tells you more about the format of the manual configuration (if needed).
More information on the X11 development can be found on the project's home
page at http://www.x.org.
Drivers are found in xf86-video-* packages, for
example xf86-video-nv. Many of the drivers
delivered with these packages are described in detail in the related manual
page. For example, if you use the nv driver, find more
information about this driver in man 4 nv.
Information about third-party drivers should be available in
/usr/share/doc/packages/<package_name>. For
example, the documentation of
x11-video-nvidiaG03 is available
in /usr/share/doc/packages/x11-video-nvidiaG03 after
the package was installed.
FUSE is the acronym for file system in user space.
This means you can configure and mount a file system as an unprivileged
user. Normally, you need to be
root for this task. FUSE alone is
a kernel module. Combined with plug-ins, it can be extended to
access almost all file systems, including remote SSH connections, ISO images, and
more.
Before you can use FUSE, you need to install the package
fuse. Depending on which file system
you want to use, you need additional plug-ins available as separate
packages. For an overview, see
Section 10.5, “Available FUSE Plug-ins”.
Generally you do not need to configure FUSE. However, it is a good idea to
create a directory where all your mount points are combined. For example,
you can create a directory ~/mounts and create
subdirectories for your different file systems there.
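For example:

```shell
# One collecting directory with a subdirectory per file system;
# the subdirectory names are arbitrary examples.
mkdir -p ~/mounts/windows ~/mounts/iso ~/mounts/jupiter.example.com
ls ~/mounts
```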
NTFS, the New Technology File System, is the default file system of Windows. Since under normal circumstances the unprivileged user cannot mount NTFS block devices using the external FUSE library, the process of mounting a Windows partition described below requires root privileges.
Become root and install the
package ntfs-3g.
Create a directory that is to be used as a mount point, for example
~/mounts/windows.
Find out which Windows partition you need. Use YaST and start the
partitioner module to see which partition belongs to Windows, but do not
modify anything. Alternatively, become root and execute
/sbin/fdisk -l. Look for partitions
with a partition type of HPFS/NTFS.
Mount the partition in read-write mode. Replace the placeholder DEVICE with your respective Windows partition:
tux > ntfs-3g /dev/DEVICE MOUNT POINT
To use your Windows partition in read-only mode, append -o
ro:
tux > ntfs-3g /dev/DEVICE MOUNT POINT -o ro
The command ntfs-3g uses the current user (UID) and
group (GID) to mount the given device. If you want to set the write
permissions to a different user, use the command id
USER to get the UID and GID values. Set them
with:
root # id tux
uid=1000(tux) gid=100(users) groups=100(users),16(dialout),33(video)
tux > ntfs-3g /dev/DEVICE MOUNT POINT -o uid=1000,gid=100
Find additional options in the man page.
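A sketch that captures the values directly instead of copying them from the id output (the device and mount point are hypothetical):

```shell
# Determine the current user's numeric UID and GID and build the
# ntfs-3g option string from them.
uid=$(id -u)
gid=$(id -g)
echo "uid=${uid},gid=${gid}"
# ntfs-3g /dev/sda3 ~/mounts/windows -o "uid=${uid},gid=${gid}"
```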
To unmount the resource, run fusermount -u
MOUNT POINT.
SSH, the secure shell network protocol, can be used to exchange data between two computers using a secure channel. To establish an SSH connection through FUSE, proceed as follows:
Install the package sshfs.
Create a directory that is to be used as a mount point. A good idea is to
use ~/mounts/HOST. Replace
HOST with the name of your remote computer.
Mount the remote file system:
root # sshfs USER@HOST: MOUNT POINT
Enter your password for the remote computer.
To unmount the resource, run fusermount -u
MOUNT POINT.
To look into an ISO image, you can mount it with the
fuseiso package:
Install the package fuseiso.
Create a directory that is to be used as a mount point, for example
~/mounts/iso.
Mount the ISO image:
root # fuseiso ISO_IMAGE MOUNT POINT
You can only read content from the ISO image, but you cannot write back. To
unmount the resource, use fusermount -u
MOUNT POINT.
FUSE is dependent on plug-ins. The following table lists common plug-ins.
Plug-in     Description
curlftpfs   mount FTP servers
encfs       mount encrypted file systems
fuseiso     mounts CD-ROM images with ISO9660 file systems in them
fusepod     mount iPods
fusesmb     mount browseable Samba clients or Windows shares
gphotofs    mount supported digital cameras through gPhoto
ntfs-3g     mount NTFS volumes (with read and write support)
obexfs      mount Bluetooth devices
sshfs       file system client based on SSH file transfer protocol
wdfs        mount WebDAV file systems
See the FUSE home page at http://fuse.sourceforge.net for more information.
Use YaST's software management module to search for software components you want to add or remove. YaST resolves all dependencies for you. To install packages not shipped with the installation media, add additional software repositories to your setup and let YaST manage them. Keep your system up-to-date by managing software updates with the update applet.
Add-on products are system extensions. You can install a third party add-on product or a special system extension of openSUSE® Leap (for example, a CD with support for additional languages or a CD with binary drivers). To install a new add-on, start YaST and select › . You can select various types of product media, like CD, FTP, USB mass storage devices (such as USB flash drives or disks) or a local directory. You can also work directly with ISO files. To add an add-on as ISO file media, select then enter the . The is arbitrary.
SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Section 11.4, “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updat…
You can upgrade an existing system without completely reinstalling it. There are two types of renewing the system or parts of it: updating individual software packages and upgrading the entire system. Updating individual packages is covered in Chapter 11, Installing or Removing Software and Chapter 13, YaST Online Update. Two ways to upgrade the system are discussed in the following sections— see Section 14.1.3, “Upgrading with YaST” and Section 14.1.4, “Distribution Upgrade with Zypper”.
Change the software collection of your system with the YaST Software Manager. This YaST module is available in two flavors: a graphical variant for X Window and a text-based variant to be used on the command line. The graphical flavor is described here—for details on the text-based YaST, see Chapter 1, YaST in Text Mode.
When installing, updating or removing packages, any changes made in the Software Manager are only applied after clicking or . YaST maintains a list with all actions, allowing you to review and modify your changes before applying them to the system.
A local or remote directory containing packages, plus additional information about these packages (package metadata).
A short name for a repository (called Alias within
Zypper and within YaST). It can be
chosen by the user when adding a repository and must be unique.
Each repository provides files describing content of the repository (package names, versions, etc.). These repository description files are downloaded to a local cache that is used by YaST.
Represents a whole product, for example openSUSE® Leap.
A pattern is an installable group of packages dedicated to a certain
purpose. For example, the Laptop pattern
contains all packages that are needed in a mobile computing environment.
Patterns define package dependencies (such as required or recommended
packages) and come with a preselection of packages marked for
installation. This ensures that the most important packages needed for a
certain purpose are available on your system after installation of the
pattern. If necessary, you can manually select or deselect
packages within a pattern.
A package is a compressed file in rpm format that
contains the files for a particular program.
A patch consists of one or more packages and may be applied by means of delta RPMs. It may also introduce dependencies to packages that are not installed yet.
A generic term for product, pattern, package or patch. The most commonly used type of resolvable is a package or a patch.
A delta RPM consists only of the binary diff between two defined versions of a package, and therefore has the smallest download size. Before being installed, the full RPM package is rebuilt on the local machine.
Certain packages are dependent on other packages, such as shared
libraries. In other words, a package may require other
packages—if the required packages are not available, the package
cannot be installed. In addition to dependencies (package requirements)
that must be fulfilled, some packages recommend other
packages. These recommended packages are only installed if they are
actually available, otherwise they are ignored and the package
recommending them is installed nevertheless.
Start the software manager from the by choosing › .
The YaST software manager can install packages or patterns from all currently enabled repositories. It offers different views and filters to make it easier to find the software you are searching for. The view is the default view of the window. To change view, click and select one of the following entries from the drop-down box. The selected view opens in a new tab.
Lists all patterns available for installation on your system.
Lists all packages sorted by groups such as , , or .
Lists all packages sorted by functionality with groups and subgroups. For example › › .
A filter to list all packages needed to add a new system language.
A filter to list packages by repository. To select more than one repository, hold the Ctrl key while clicking repository names. The “pseudo repository” lists all packages currently installed.
Lets you search for a package according to certain criteria. Enter a search term and press Enter. Refine your search by specifying where to and by changing the . For example, if you do not know the package name but only the name of the application that you are searching for, try including the package in the search process.
If you have already selected packages for installation, update or removal, this view shows the changes that will be applied to your system when you click . To filter for packages with a certain status in this view, activate or deactivate the respective check boxes. Press Shift–F1 for details on the status flags.
To list all packages that do not belong to an active repository, choose › › and then choose › . This is useful, for example, if you have deleted a repository and want to make sure no packages from that repository remain installed.
Certain packages are dependent on other packages, such as shared libraries. On the other hand, some packages cannot coexist with others on the system. If possible, YaST automatically resolves these dependencies or conflicts. If your choice results in a dependency conflict that cannot be automatically solved, you need to solve it manually as described in Section 11.2.4, “Checking Software Dependencies”.
When removing any packages, by default YaST only removes the selected packages. If you want YaST to also remove any other packages that become unneeded after removal of the specified package, select › from the main menu.
Search for packages as described in Section 11.2.1, “Views for Searching Packages or Patterns”.
The packages found are listed in the right pane. To install a package or remove it, right-click it and choose or . If the relevant option is not available, check the package status indicated by the symbol in front of the package name—press Shift–F1 for help.
To apply an action to all packages listed in the right pane, go to the main menu and choose an action from › .
To install a pattern, right-click the pattern name and choose .
It is not possible to remove a pattern per se. Instead, select the packages of a pattern you want to remove and mark them for removal.
To select more packages, repeat the steps mentioned above.
Before applying your changes, you can review or modify them by clicking › . By default, all packages that will change status are listed.
To revert the status for a package, right-click the package and select one of the following entries: if the package was scheduled to be deleted or updated, or if it was scheduled for installation. To abandon all changes and quit the Software Manager, click and .
When you are finished, click to apply your changes.
If YaST finds dependencies on other packages, it presents a list of packages that have additionally been chosen for installation, update or removal. Click to accept them.
After all selected packages are installed, updated or removed, the YaST Software Manager automatically terminates.
Installing source packages with YaST Software Manager is not possible at
the moment. Use the command line tool zypper for this
purpose. For more information, see
Section 2.1.2.5, “Installing or Downloading Source Packages”.
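As a command-line sketch of that route — the package name bash is only an example, and a configured source repository plus root privileges are assumed:

```shell
# Install the source package of an example package with zypper. The -d
# (--build-deps-only) variant would install only its build dependencies.
if command -v zypper >/dev/null 2>&1; then
    out=$(zypper --non-interactive source-install bash 2>&1) \
        || out="zypper source-install failed (source repository enabled? root?)"
else
    out="zypper not available on this system"
fi
echo "$out"
```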
Instead of updating individual packages, you can also update all installed packages or all packages from a certain repository. When mass updating packages, the following aspects are generally considered:
priorities of the repositories that provide the package,
architecture of the package (for example, AMD64/Intel 64),
version number of the package,
package vendor.
Which of the aspects has the highest importance for choosing the update candidates depends on the respective update option you choose.
To update all installed packages to the latest version, choose › › from the main menu.
All repositories are checked for possible update candidates, using the following policy: YaST first tries to restrict the search to packages with the same architecture and vendor as the installed one. If the search is positive, the “best” update candidate from those is selected according to the process below. However, if no comparable package of the same vendor can be found, the search is expanded to all packages with the same architecture. If still no comparable package can be found, all packages are considered and the “best” update candidate is selected according to the following criteria:
Repository priority: Prefer the package from the repository with the highest priority.
If more than one package results from this selection, choose the one with the “best” architecture (best choice: matching the architecture of the installed one).
If the resulting package has a higher version number than the installed one, the installed package will be updated and replaced with the selected update candidate.
This option tries to avoid changes in architecture and vendor for the installed packages, but under certain circumstances, they are tolerated.
If you choose › › instead, the same criteria apply but any candidate package found is installed unconditionally. Thus, choosing this option might actually lead to downgrading some packages.
To make sure that the packages for a mass update derive from a certain repository:
Choose the repository from which to update as described in Section 11.2.1, “Views for Searching Packages or Patterns” .
On the right hand side of the window, click . This explicitly allows YaST to change the package vendor when replacing the packages.
When you proceed with , all installed packages will be replaced by packages deriving from this repository, if available. This may lead to changes in vendor and architecture and even to downgrading some packages.
To refrain from this, click . Note that you can only cancel this until you click the button.
Before applying your changes, you can review or modify them by clicking › . By default, all packages that will change status are listed.
If all options are set according to your wishes, confirm your changes with to start the mass update.
Most packages are dependent on other packages. If a package, for example, uses a shared library, it is dependent on the package providing this library. On the other hand, some packages cannot coexist, causing a conflict (for example, you can only install one mail transfer agent: sendmail or postfix). When installing or removing software, the Software Manager makes sure no dependencies or conflicts remain unsolved to ensure system integrity.
In case there exists only one solution to resolve a dependency or a conflict, it is resolved automatically. Multiple solutions always cause a conflict which needs to be resolved manually. If solving a conflict involves a vendor or architecture change, it also needs to be solved manually. When clicking to apply any changes in the Software Manager, you get an overview of all actions triggered by the automatic resolver which you need to confirm.
By default, dependencies are automatically checked. A check is performed every time you change a package status (for example, by marking a package for installation or removal). This is generally useful, but can become exhausting when manually resolving a dependency conflict. To disable this function, go to the main menu and deactivate › . Manually perform a dependency check with › . A consistency check is always performed when you confirm your selection with .
To review a package's dependencies, right-click it and choose . A map showing the dependencies opens. Packages that are already installed are displayed in a green frame.
Unless you are very experienced, follow the suggestions YaST makes when handling package conflicts, otherwise you may not be able to resolve them. Keep in mind that every change you make potentially triggers other conflicts, so you can easily end up with a steadily increasing number of conflicts. In case this happens, quit the Software Manager, discard all your changes and start again.
In addition to the hard dependencies required to run a program (for example a certain library), a package can also have weak dependencies, that add for example extra functionality or translations. These weak dependencies are called package recommendations.
The way package recommendations are handled has slightly changed starting with openSUSE Leap 42.1. Nothing has changed when installing a new package—recommended packages are still installed by default.
Prior to openSUSE Leap 42.1, missing recommendations for already installed
packages were installed automatically. Now these packages will no longer
be installed automatically. To switch to the old default, set
PKGMGR_REEVALUATE_RECOMMENDED="yes" in
/etc/sysconfig/yast2. To install all missing
recommendations for already installed packages, start › and choose › .
To disable the installation of recommended packages when installing new
packages, deactivate › in the
YaST Software Manager. If using the command line tool Zypper to install
packages, use the option --no-recommends.
To install third-party software, add additional software repositories to your system. By default, the product repositories such as openSUSE Leap-DVD 42.3 and a matching update repository are automatically configured. Depending on the initially selected product, an additional repository containing translations, dictionaries, etc. might also be configured.
To manage repositories, start YaST and select › . The dialog opens. Here, you can also manage subscriptions to so-called by changing the at the right corner of the dialog to . A Service in this context is a Repository Index Service (RIS) that can offer one or more software repositories. Such a Service can be changed dynamically by its administrator or vendor.
Each repository provides files describing content of the repository (package names, versions, etc.). These repository description files are downloaded to a local cache that is used by YaST. To ensure their integrity, software repositories can be signed with the GPG Key of the repository maintainer. Whenever you add a new repository, YaST offers the ability to import its key.
Before adding external software repositories to your list of repositories, make sure this repository can be trusted. SUSE is not responsible for any problems arising from software installed from third-party software repositories.
You can either add repositories from DVD/CD, removable mass storage devices (such as flash disks), a local directory, an ISO image or a network source.
To add repositories from the dialog in YaST proceed as follows:
Click .
Select one of the options listed in the dialog:
To scan your network for installation servers announcing their services via SLP, select and click .
To add a repository from a removable medium, choose the relevant option and insert the medium or connect the USB device to the machine, respectively. Click to start the installation.
For the majority of repositories, you will be asked to specify the path (or URL) to the media after selecting the respective option and clicking . Specifying a is optional. If none is specified, YaST will use the product name or the URL as repository name.
The option is activated by default. If you deactivate the option, YaST will automatically download the files later, if needed.
Depending on the repository you have added, you may be prompted to import the repository's GPG key or asked to agree to a license.
After confirming these messages, YaST will download and parse the metadata. It will add the repository to the list of .
If needed, adjust the repository as described in Section 11.3.2, “Managing Repository Properties”.
Confirm your changes with to close the configuration dialog.
After having successfully added the repository, the software manager starts and you can install packages from this repository. For details, refer to Chapter 11, Installing or Removing Software.
The overview of the lets you change the following repository properties:
The repository status can either be or . You can only install packages from repositories that are enabled. To turn a repository off temporarily, select it and deactivate . You can also double-click a repository name to toggle its status. To remove a repository completely, click .
When refreshing a repository, its content description (package names, versions, etc.) is downloaded to a local cache that is used by YaST. It is sufficient to do this once for static repositories such as CDs or DVDs, whereas repositories whose content changes often should be refreshed frequently. The easiest way to keep a repository's cache up-to-date is to choose . To do a manual refresh, click and select one of the options.
Packages from remote repositories are downloaded before being installed.
By default, they are deleted upon a successful installation. Activating
prevents the deletion of
downloaded packages. The download location is configured in
/etc/zypp/zypp.conf, by default it is
/var/cache/zypp/packages.
The of a repository is a value between
1 and 200, with
1 being the highest priority and
200 the lowest priority. Any new repositories that are
added with YaST get a priority of 99 by default. If
you do not care about a priority value for a certain repository, you can
also set the value to 0 to apply the default priority
to that repository (99). If a package is available in
more than one repository, then the repository with the highest priority
takes precedence. This is useful if you want to avoid downloading
packages unnecessarily from the Internet by giving a local repository
(for example, a DVD) a higher priority.
The repository with the highest priority takes precedence in any case. Therefore, make sure that the update repository always has the highest priority, otherwise you might install an outdated version that will not be updated until the next online update.
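The same priority can be set non-interactively with zypper. In this sketch the repository alias dvd-repo is a hypothetical example; list your real aliases with zypper repos:

```shell
# Give a local repository a higher priority: lower numbers win, the
# default is 99. The alias "dvd-repo" is only an example.
if command -v zypper >/dev/null 2>&1; then
    out=$(zypper mr --priority 50 dvd-repo 2>&1) \
        || out="zypper mr failed (alias dvd-repo is only an example)"
else
    out="zypper not available on this system"
fi
echo "$out"
```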
To change a repository name or its URL, select it from the list with a single-click and then click .
To ensure their integrity, software repositories can be signed with the GPG Key of the repository maintainer. Whenever you add a new repository, YaST offers to import its key. Verify it as you would do with any other GPG key and make sure it does not change. If you detect a key change, something might be wrong with the repository. Disable the repository as an installation source until you know the cause of the key change.
To manage all imported keys, click in the dialog. Select an entry with the mouse to show the key properties at the bottom of the window. Add, edit or delete keys with a click on the respective buttons.
SUSE offers a continuous stream of software security patches for your product. They can be installed using the YaST Online Update module. It also offers advanced features to customize the patch installation.
The GNOME desktop also provides a tool for installing patches and for installing package updates of packages that are already installed. In contrast to a Patch, a package update is only related to one package and provides a newer version of a package. The GNOME tool lets you install both patches and package updates with a few clicks as described in Section 11.4.2, “Installing Patches and Package Updates”.
Whenever new patches or package updates are available, GNOME shows a notification about this at the bottom of the desktop (or on the locked screen).
To install the patches and updates, click in the notification message. This opens the GNOME
update viewer. Alternatively, open the update viewer from › › or press Alt–F2 and enter
gpk-update-viewer.
All and are preselected. It is strongly recommended to install these patches. can be manually selected by activating the respective check boxes. Get detailed information on a patch or package update by clicking its title.
Click to start the installation. You
will be prompted for the root password.
Enter the root password in the authentication dialog and proceed.
To configure notifications, select › › › and adjust the desired settings.
To configure how often to check for updates or to activate or deactivate repositories, select › › › . The tabs of the configuration dialog let you modify the following settings:
Choose how often a check for updates is performed: , , , or .
Choose how often a check for major upgrades is performed: , , or .
This configuration option is only available on mobile computers. Turned off by default.
Lists the repositories that will be checked for available patches and package updates. You can enable or disable certain repositories.
Update Repository Enabled
To make sure that you are notified about any patches that are
security-relevant, keep the Updates repository for
your product enabled.
More options are configurable using gconf-editor:
› .
Add-on products are system extensions. You can install a third party add-on product or a special system extension of openSUSE® Leap (for example, a CD with support for additional languages or a CD with binary drivers). To install a new add-on, start YaST and select › . You can select various types of product media, like CD, FTP, USB mass storage devices (such as USB flash drives or disks) or a local directory. You can also work directly with ISO files. To add an add-on as ISO file media, select then enter the . The is arbitrary.
To install a new add-on, proceed as follows:
In YaST select › to see an overview of already installed add-on products.
To install a new add-on product, click .
From the list of available specify the type matching your repository.
To add a repository from a removable medium, choose the relevant option and insert the medium or connect the USB device to the machine, respectively.
You can choose to now. If the option is unchecked, YaST will automatically download the files later, if needed. Click to proceed.
When adding a repository from the network, enter the data you are prompted for. Continue with .
Depending on the repository you have added, you may be asked if you want to import the GPG key with which it is signed or asked to agree to a license.
After confirming these messages, YaST will download and parse the metadata and add the repository to the list of .
If needed, adjust the repository as described in Section 11.3.2, “Managing Repository Properties” or confirm your changes with to close the configuration dialog.
After having successfully added the repository for the add-on media, the software manager starts and you can install packages. Refer to Chapter 11, Installing or Removing Software for details.
Some hardware needs binary-only drivers to function properly. If you have such hardware, refer to the release notes for more information about availability of binary drivers for your system. To read the release notes, open YaST and select › .
SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Section 11.4, “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updating software packages: YaST Online Update.
The current patches for openSUSE® Leap are available from an update software repository, which is automatically configured during the installation. If you have registered your product during the installation, an update repository is already configured. If you have not registered openSUSE Leap, you can do so by starting the in YaST. Alternatively, you can manually add an update repository from a source you trust. To add or remove repositories, start the Repository Manager with › in YaST. Learn more about the Repository Manager in Section 11.3, “Managing Software Repositories and Services”.
SUSE provides updates with different relevance levels:
Fix severe security hazards and should always be installed.
Fix issues that could compromise your computer.
Fix non-security relevant issues or provide enhancements.
To open the YaST dialog, start YaST and
select › . Alternatively, start it from the command
line with yast2 online_update.
The window consists of four sections.
The section on the left lists the available
patches for openSUSE Leap. The patches are sorted by security relevance:
security, recommended, and
optional. You can change the view of the
section by selecting one of the following options
from :
Non-installed patches that apply to packages installed on your system.
Patches that either apply to packages not installed on your system, or patches that have requirements which have already been fulfilled (because the relevant packages have already been updated from another source).
All patches available for openSUSE Leap.
Each list entry in the section consists of a
symbol and the patch name. For an overview of the possible symbols and their
meaning, press Shift–F1. Actions required by Security and
Recommended patches are automatically preset. These
actions are ,
and .
If you install an up-to-date package from a repository other than the update repository, the requirements of a patch for this package may be fulfilled with this installation. In this case a check mark is displayed in front of the patch summary. The patch will be visible in the list until you mark it for installation. This will in fact not install the patch (because the package already is up-to-date), but mark the patch as having been installed.
Select an entry in the section to view a short at the bottom left corner of the dialog. The upper right section lists the packages included in the selected patch (a patch can consist of several packages). Click an entry in the upper right section to view details about the respective package that is included in the patch.
The YaST Online Update dialog allows you to either install all available patches at once or manually select the desired patches. You may also revert patches that have been applied to the system.
By default, all new patches (except optional ones) that
are currently available for your system are already marked for installation.
They will be applied automatically once you click
or .
If one or multiple patches require a system reboot, you will be notified
about this before the patch installation starts. You can then either decide
to continue with the installation of the selected patches, skip the
installation of all patches that need rebooting and install the rest, or go
back to the manual patch selection.
Start YaST and select › .
To automatically apply all new patches (except optional
ones) that are currently available for your system, press
or .
First modify the selection of patches that you want to apply:
Use the respective filters and views that the interface provides. For details, refer to Section 13.1, “The Online Update Dialog”.
Select or deselect patches according to your needs and wishes by right-clicking the patch and choosing the respective action from the context menu.
Do not deselect any security-related patches without
a very good reason. These patches fix severe security hazards and
prevent your system from being exploited.
Most patches include updates for several packages. If you want to change actions for single packages, right-click a package in the package view and choose an action.
To confirm your selection and apply the selected patches, proceed with or .
After the installation is complete, click to leave the YaST . Your system is now up-to-date.
YaST also offers the possibility to set up an automatic update with daily,
weekly or monthly schedule. To use the respective module, you need to
install the
yast2-online-update-configuration
package first.
By default, updates are downloaded as delta RPMs. Since rebuilding RPM packages from delta RPMs is a memory- and processor-intensive task, certain setups or hardware configurations might require you to disable the use of delta RPMs for the sake of performance.
Some patches, such as kernel updates or packages requiring license agreements, require user interaction, which would cause the automatic update procedure to stop. You can configure to skip patches that require user interaction.
After installation, start YaST and select › .
Alternatively, start the module with
yast2 online_update_configuration from the command
line.
Activate .
Choose the update interval: , , or .
To automatically accept any license agreements, activate .
Select if you want to in case you want the update procedure to proceed fully automatically.
If you select to skip any packages that require interaction, run a manual occasionally to install those patches, too. Otherwise you might miss important patches.
To automatically install all packages recommended by updated packages, activate .
To disable the use of delta RPMs (for performance reasons), deactivate .
To filter the patches by category (such as security or recommended), activate and add the appropriate patch categories from the list. Only patches of the selected categories will be installed. Others will be skipped.
Confirm your configuration with .
The automatic online update does not automatically restart the system afterward. If there are package updates that require a system reboot, you need to do this manually.
You can upgrade an existing system without completely reinstalling it. There are two types of renewing the system or parts of it: updating individual software packages and upgrading the entire system. Updating individual packages is covered in Chapter 11, Installing or Removing Software and Chapter 13, YaST Online Update. Two ways to upgrade the system are discussed in the following sections— see Section 14.1.3, “Upgrading with YaST” and Section 14.1.4, “Distribution Upgrade with Zypper”.
openSUSE Leap 42.3 is only available as 64-bit version. Upgrading 32-bit installations to 64-bit is not supported. Please follow the instructions in Chapter 1, Installation Quick Start and Chapter 3, Installation with YaST to install openSUSE Leap on your computer or consider switching to openSUSE Tumbleweed.
The release notes are bundled in the installer, and you may also read them online at openSUSE Leap Release Notes.
Software tends to “grow” from version to version. Therefore,
take a look at the available partition space with df
before updating. If you suspect you are running short of disk space,
secure your data before you update and repartition your system. There is
no general rule regarding how much space each partition should have.
Space requirements depend on your particular partitioning profile, the
software selected, and the version numbers of the system.
Before upgrading, copy the old configuration files to a separate medium
(such as removable hard disk or USB flash drive) to secure the data.
This primarily applies to files stored in /etc as
well as some of the directories and files in /var.
You may also want to write the user data in /home
(the HOME directories) to a backup medium. Back up this
data as root. Only
root has read permission
for all local files.
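A minimal sketch of such a backup with tar; BACKUP_DIR is a placeholder that should point to a removable medium in practice:

```shell
# Archive /etc before the upgrade. Run as root for a complete backup;
# --ignore-failed-read lets tar continue past files a non-root user
# cannot read instead of aborting.
BACKUP_DIR=${BACKUP_DIR:-/tmp/upgrade-backup}
mkdir -p "$BACKUP_DIR"
tar czf "$BACKUP_DIR/etc-backup.tar.gz" --ignore-failed-read -C / etc \
    || echo "some files were skipped; run as root for a full backup"
ls -lh "$BACKUP_DIR/etc-backup.tar.gz"
```

The same pattern applies to /var and the /home directories mentioned above.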
Before starting your update, make note of the root partition. The
command df / lists the device name of the root
partition. In Example 14.1, “List with df -h”, the root partition
to write down is /dev/sda3 (mounted as
/).
df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        74G   22G   53G  29% /
udev            252M  124K  252M   1% /dev
/dev/sda5       116G  5.8G  111G   5% /home
/dev/sda1        39G  1.6G   37G   4% /windows/C
/dev/sda2       4.6G  2.6G  2.1G  57% /windows/D
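The device name can also be extracted directly with a short pipeline; on the example system above this prints /dev/sda3:

```shell
# Print only the device of the file system mounted at /.
root_dev=$(df / | awk 'NR==2 {print $1}')
echo "root partition: $root_dev"
```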
If you upgrade a default system from the previous version to this version, YaST works out the necessary changes and performs them. Depending on your customization, some steps (or the entire upgrade procedure) may fail and you must resort to copying back your backup data. Check the following issues before starting the system update.
Before upgrading the system, make sure that
/etc/passwd and /etc/group do
not contain any syntax errors. For this purpose, start the verification
utilities pwck and grpck as
root to eliminate any
reported errors.
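A sketch of the read-only variant of these checks — the -r option reports problems without modifying anything, and grpck -r /etc/group works analogously:

```shell
# Verify /etc/passwd consistency without modifying it. Run as root so the
# companion shadow file is readable.
if command -v pwck >/dev/null 2>&1; then
    result=$(pwck -r /etc/passwd 2>&1) \
        && result="${result:-passwd entries look consistent}" \
        || result="pwck reported problems or needs root: $result"
else
    result="pwck not installed (shadow utilities)"
fi
echo "$result"
```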
If your machine serves as a VM Host Server for KVM or Xen, make sure to properly shut down all running VM Guests prior to the update. Otherwise you may not be able to access the guests after the update.
Before updating PostgreSQL
(postgres), dump the
databases. See the manual page of pg_dump. This is
only necessary if you actually used PostgreSQL prior to your update.
Following the preparation procedure outlined in Section 14.1.1, “Preparations”, you can now upgrade your system:
Insert the openSUSE Leap DVD into the drive, then reboot the computer to start the installation program. On machines with a traditional BIOS you will see the graphical boot screen shown below. On machines equipped with UEFI, a slightly different boot screen is used. Secure boot on UEFI machines is supported.
Use F2 to change the language for the installer. A corresponding keyboard layout is chosen automatically. See Section 2.2.1, “The Boot Screen on Machines Equipped with Traditional BIOS” or Section 2.2.2, “The Boot Screen on Machines Equipped with UEFI” for more information about changing boot options.
Select on the boot screen, then press Enter. This boots the system and loads the openSUSE Leap installer. Do not select .
The and are initialized with the language settings you have chosen on the boot screen. Change them here, if necessary.
Read the License Agreement. It is presented in the language you have chosen on the boot screen. are available. Proceed with .
YaST determines if there are multiple root partitions. If there is
only one, continue with the next step. If there are several, select
the right partition and confirm with
(/dev/sda3 was selected in the example in
Section 14.1.1, “Preparations”). YaST reads the old
fstab on this partition to analyze and mount the
file systems listed there.
From this point on, the Release Notes can be viewed from any screen during the installation process by selecting .
YaST shows a list of . By default, all repositories will be removed. If you have not added any custom repositories, do not change the settings. The packages for the upgrade will be installed from DVD, and the default online repositories can optionally be enabled in the next step.
If you added custom repositories, for example from the openSUSE Build Service, you have two choices:
Leave the repository in state Removed. Software
that was installed from this repository will get removed during the
upgrade. Use this method if no version of the repository that matches
the new openSUSE Leap version is available.
Update and enable the repository. Use this method if a version that matches the new openSUSE Leap version is available for the repository. Change its URL by clicking the repository in the list and then . Enable the repository afterward by clicking until it is set to .
Do not use repositories matching the previous version unless you are absolutely sure they will also work with the new openSUSE version. If not, the system may be unstable or not work at all.
If an Internet connection is available, you may now activate optional online repositories. Enable all repositories you had enabled before to ensure that all packages are upgraded correctly. Enabling the update repositories is strongly recommended—this ensures that you get the latest package versions available, including all security updates and fixes.
After having proceeded with , you need to confirm the license agreement for the online repositories with .
Use the screen to review and—if necessary—change several proposed installation settings. The current configuration is listed for each setting. To change it, click the headline.
View detailed hardware information by clicking . In the resulting screen you can also change —see Section 3.10.5, “” for more information.
By default, YaST performs a full update based on a selection of patterns. Each pattern contains several software packages needed for specific functions (for example, Web and LAMP server or a print server).
Here you can change the package selection or change the to .
You can further tweak the package selection on the screen. Here you can not only select patterns but also list their contents and search for individual packages. See Chapter 11, Installing or Removing Software for more information.
If you intend to enhance your system, it is recommended to finish the upgrade first and then install additional software.
You can also make backups of various system components. Selecting backups slows down the upgrade process. Use this option if you do not have a recent system backup.
This section allows you to change the primary language and configure additional . Optionally, you can adjust the keyboard layout and timezone to the selected primary language.
Here you can change the keyboard layout and adjust additional .
This section shows the boot loader configuration. Changing the defaults is only recommended if really needed. Refer to Chapter 12, The Boot Loader GRUB 2 for details.
After you have finalized the system configuration on the screen, click . Depending on your software selection you may need to agree to license agreements before the installation confirmation screen pops up. Up to this point no changes have been made to your system. After you click a second time, the upgrade process starts.
Once the basic upgrade installation is finished, YaST reboots the system. Finally, YaST updates the remaining software, if any, and displays the release notes, if wanted.
With the zypper command line utility you can upgrade to
the next version of the distribution. Most importantly, you can initiate
the system upgrade process from within the running system.
This feature is attractive for advanced users who want to run remote upgrades or upgrades on many similarly configured systems.
To avoid unexpected errors during the upgrade process using zypper, minimize potential sources of error: quit as many applications as possible, stop unneeded services, and log out all regular users.
Disable third-party repositories before starting the upgrade, or lower the priority of these repositories to make sure packages from the default system repositories get preference. After the upgrade, enable them again and edit their version strings to match the version number of the now running, upgraded distribution.
Before actually starting the upgrade procedure, check that your system backup is up-to-date and restorable. This is especially important because you need to enter many of the following steps manually.
The program zypper supports long and short command
names. For example, you can abbreviate zypper install
as zypper in. In the following text, the short
variants are used.
Run the online update to make sure the software management stack is up-to-date. For more information, see Chapter 13, YaST Online Update.
Configure the repositories you want to use as update sources. Getting
this right is crucial. Either use YaST (see
Section 11.3, “Managing Software Repositories and Services”) or zypper
(see Section 2.1, “Using Zypper”).
The names of the repositories used in the following steps may vary depending on your customizations.
To view your current repositories enter:
tux > zypper lr -u
Increase the version number of the system repositories from 42.2 to 42.3. Add the new repositories with commands such as:
server=http://download.example.org
tux > sudo zypper ar $server/distribution/leap/42.3/repo/oss/ Leap-42.3-OSS
tux > sudo zypper ar $server/update/leap/42.3/oss/ Leap-42.3-Update
And remove the old repositories:
tux > sudo zypper rr Leap-42.2-OSS
tux > sudo zypper rr Leap-42.2-Update
If necessary, repeat these steps for other repositories to ensure a clean upgrade path for all your packages.
Disable third party repositories or other Open Build Service repositories, because
zypper dup is guaranteed to work with the default
repositories only (replace REPO-ALIAS with
the name of the repository you want to disable):
tux > sudo zypper mr -d REPO-ALIAS
Alternatively, you can lower the priority of these repositories.
zypper dup will remove all packages having
unresolved dependencies, but it keeps packages of disabled
repositories as long as their dependencies are satisfied.
zypper dup ensures that all installed packages come
from one of the available repositories. It does not consider the version
or architecture, but prevents changing the vendor of the installed
packages by default, using the --no-allow-vendor-change
option. Packages that are no longer available in the repositories are
considered orphaned. Such packages get uninstalled if their dependencies
cannot be satisfied. If they can be satisfied, such packages stay
installed.
Once done, check your repository configuration with:
tux > zypper lr -d
Refresh local metadata and repository contents with zypper
ref.
Update Zypper and the package management itself with zypper
patch --updatestack-only.
Run the actual distribution upgrade with zypper dup.
You are asked to confirm the license of openSUSE Leap and of some
packages—depending on the set of installed packages.
Reboot the system with shutdown -r now.
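For reference, the steps above can be collected into one sketch of a script. This is not an official tool: the server URL and repository aliases are the example values used in this section, and the function should only be called as root after you have reviewed and adapted it.

```shell
#!/bin/bash
# Sketch of the zypper upgrade from Leap 42.2 to 42.3, following the
# steps above. The URL and aliases are the example values from this
# section; adjust them first, then call leap_upgrade as root.
set -e

leap_upgrade() {
    local server="http://download.example.org"
    zypper ar "$server/distribution/leap/42.3/repo/oss/" Leap-42.3-OSS
    zypper ar "$server/update/leap/42.3/oss/" Leap-42.3-Update
    zypper rr Leap-42.2-OSS                 # drop the old repositories
    zypper rr Leap-42.2-Update
    zypper lr -d                            # review the configuration
    zypper ref                              # refresh metadata
    zypper patch --updatestack-only         # update the package manager first
    zypper dup                              # the actual distribution upgrade
}
```

The function is only defined here, not run; on a production system you would inspect the repository list between the `zypper lr -d` and `zypper dup` steps before continuing.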
Regardless of your overall updated environment, you can always update individual packages. From this point on, however, it is your responsibility to ensure that your system remains consistent.
Use the YaST software management tool to update packages as described in Chapter 11, Installing or Removing Software. Select components from the YaST package selection list according to your needs. If a newer version of a package exists, the version numbers of the installed and the available versions are listed in blue color in the column. If you select a package essential for the overall operation of the system, YaST issues a warning. Such packages should be updated only in the update mode. For example, many packages contain shared libraries. Updating these programs and applications in the running system may lead to system instability.
Problems and special issues of the various versions are published online as they are identified. See the links listed below. Important updates of individual packages can be accessed using the YaST Online Update. For more information, see Chapter 13, YaST Online Update.
Refer to the Product highlights (http://en.opensuse.org/Product_highlights) and the
Bugs article in the openSUSE wiki at http://en.opensuse.org/openSUSE:Most_annoying_bugs for
information about recent changes and issues.
Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.
When working with Linux, you can communicate with the system almost without ever requiring a command line interpreter (the shell). After booting your Linux system, you are usually directed to a graphical user interface that guides you through the login process and the following interactions with the operating system. The graphical user interface in Linux is initially configured during installation and used by desktop environments such as KDE or GNOME.
Nevertheless, it is useful to have some basic knowledge of working with a shell because you might encounter situations where the graphical user interface is not available, for example, if a problem with the X Window System occurs. If you are not familiar with a shell, you might feel a bit uncomfortable at first when entering commands, but the more you get used to it, the more you will realize that the command line is often the quickest and easiest way to perform some daily tasks.
For Unix or Linux, several shells are available which differ slightly in behavior and in the commands they accept. The default shell in openSUSE® Leap is Bash (GNU Bourne-Again Shell).
The following sections will guide you through your first steps with the Bash shell and will show you how to complete some basic tasks via the command line. If you are interested in learning more or rather feel like a shell “power user” already, refer to Chapter 16, Bash and Bash Scripts.
Basically, there are two different ways to start a shell from the graphical user interface which usually shows after you have booted your computer:
you can leave the graphical user interface or
you can start a terminal window within the graphical user interface.
While the first option is always available, you can only make use of the second option when you are already logged in to a desktop such as KDE or GNOME. Whichever way you choose, there is always a way back and you can switch back and forth between the shell and the graphical user interface.
If you want to give it a try, press Ctrl–Alt–F2 to leave the graphical user interface. The graphical user interface disappears and you are taken to a shell which prompts you to log in. Type your username and press Enter. Then type your password and press Enter. The prompt now changes and shows some useful information as in the following example:
tux@linux:~>

The prompt shows your login (tux), the hostname of your computer (linux), and the path to the current directory. Directly after login, the current directory usually is your home directory, indicated by the tilde symbol (~).
When you are logged in at a remote computer the information provided by the prompt always shows you which system you are currently working on.
When the cursor is located behind this prompt, you can pass
commands directly to your computer system. For example, you can now enter
ls -l to list the contents of the
current directory in a detailed format. If this is enough for your first
encounter with the shell and you want to go back to the graphical user
interface, you should log out from your shell session first. To do so,
type exit and press Enter.
Then press Alt–F7 to switch back to the graphical user interface. You will find
your desktop and the applications running on it unchanged.
When you are already logged in to the GNOME or the KDE desktop and want
to start a terminal window within the desktop, press Alt–F2 and enter
konsole (for KDE) or gnome-terminal
(for GNOME). This opens a terminal window on your desktop. As you are
already logged in to your desktop, the prompt shows information about
your system as described above. You can now enter commands and execute
tasks just like in any shell which runs parallel to your desktop. To
switch to another application on the desktop just click on the
corresponding application window or select it from the taskbar of your
panel. To close the terminal window press Alt–F4.
As soon as the prompt appears on the shell it is ready to receive and execute commands. A command can consist of several elements. The first element is the actual command, followed by parameters or options. You can type a command and edit it by using the following keys: ←, →, Home, End, <— (Backspace), Del, and Space. You can correct typing errors or add options. The command is not executed until you press Enter.
The shell is not verbose: in contrast to some graphical user interfaces, it usually does not provide confirmation messages when commands have been executed. Messages only appear in case of problems or errors —or if you explicitly ask for them by executing a command with a certain option.
Also keep this in mind for commands to delete objects. Before entering a
command like rm (without any option) for removing a
file, you should know if you really want to get rid of the object: it
will be deleted irretrievably, without confirmation.
In Section 15.6.1, “Permissions for User, Group and Others” you already got to know
one of the most basic commands: ls,
which is used to list the contents of a directory. This
command can be used with or without options. Entering the plain
ls command shows the contents of the current
directory:
tux > ls
bin  Desktop  Documents  public_html  tux.txt
Files in Linux may have a file extension or a suffix, such as
.txt, but do not need to have one. This makes it
difficult to differentiate between files and folders in this output of
the ls. By default, the colors in the Bash shell give
you a hint: directories are usually shown in blue, files in black.
A better way to get more details about the contents of a
directory is using the ls command with a string of
options. Options modify the way a command works so that you can get it
to carry out specific tasks. Options are separated from the command with
a blank and are usually prefixed with a hyphen. The ls
-l command shows the contents of the same
directory in full detail (long listing format):
tux > ls -l
drwxr-xr-x 1 tux users    48 2015-06-23 16:08 bin
drwx---r-- 1 tux users 53279 2015-06-21 13:16 Desktop
drwx------ 1 tux users   280 2015-06-23 16:08 Documents
drwxr-xr-x 1 tux users 70733 2015-06-21 09:35 public_html
-rw-r--r-- 1 tux users 47896 2015-06-21 09:46 tux.txt
This output shows the following information about each object:
drwxr-xr-x 1 tux users 48 2006-06-23 16:08 bin

From left to right, the fields have the following meaning:

Type of object and access permissions. For further information, refer to Section 15.6.1, “Permissions for User, Group and Others”.
Number of hard links to this file.
Owner of the file or directory. For further information, refer to Section 15.6.1, “Permissions for User, Group and Others”.
Group assigned to the file or directory. For further information, refer to Section 15.6.1, “Permissions for User, Group and Others”.
File size in bytes.
Date and time of the last change.
Name of the object.
Usually, you can combine several options by prefixing only the first
option with a hyphen and then write the others consecutively without a
blank. For example, if you want to see all files in a directory in long
listing format, you can combine the two options -l and
-a (show all files) for the ls
command. Executing ls -la shows also
hidden files in the directory, indicated by a dot in front (for example,
.hiddenfile).
The list of contents you get with ls is sorted
alphabetically by filenames. But like in a graphical file manager, you
can also sort the output of ls -l
according to various criteria such as date, file extension or file size:
For date and time, use ls -lt
(displays newest first).
For extensions, use ls -lx
(displays files with no extension first).
For file size, use ls -lS
(displays largest first).
To revert the order of sorting, add -r as an option to
your ls command. For example, ls
-lr gives you the contents list sorted in reverse
alphabetical order, ls -ltr shows the
oldest files first. There are lots of other useful options for
ls. In the following section you will learn how to
investigate them.
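The sorting options above can be tried out in a scratch directory; the file names below are made up for the demonstration, and the backdating relies on the GNU `touch -d` option:

```shell
# Scratch demo of the ls sorting options described above.
mkdir -p /tmp/ls-demo && cd /tmp/ls-demo
printf 'x' > small.txt            # a 1-byte file
printf '%100s' '' > large.log     # a 100-byte file
touch old.txt
touch -d '2015-01-01' old.txt     # backdate one file (GNU touch)
ls -lS     # sorted by size, largest (large.log) first
ls -lt     # sorted by modification time, newest first
ls -ltr    # reverse time order, oldest (old.txt) first
```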
After having entered several commands, your shell will begin to fill up with all sorts of commands and the corresponding outputs. In the following table, find some useful shortcut keys for navigating and editing in the shell.
| Shortcut Key | Function |
|---|---|
| Ctrl–L | Clears the screen and moves the current line to the top of the page. |
| Ctrl–C | Aborts the command which is currently being executed. |
| Shift–Page ↑ | Scrolls upwards. |
| Shift–Page ↓ | Scrolls downwards. |
| Ctrl–U | Deletes from cursor position to start of line. |
| Ctrl–K | Deletes from cursor position to the end of line. |
| Ctrl–D | Closes the shell session. |
| ↑, ↓ | Browses in the history of executed commands. |
If you remember the name of command but are not sure about the options or the syntax of the command, choose one of the following possibilities:
--help/-h option
If you only want to look up the options of a certain command, try
entering the command followed by a space and --help.
This --help option exists for many commands. For
example, ls --help displays all
the options for the ls command.
To learn more about the various commands, you can also use the manual
pages. Manual pages also give a short description of what the command
does. They can be accessed with man followed by
the name of the command, for example, man ls.
Man pages are displayed directly in the shell. To navigate them, use the following keys:
Move up and down with Page ↑ and Page ↓
Move between the beginning and the end of a document with Home and End
Quit the man page viewer by pressing Q
For more information about the man command, use
man man.
Info pages usually provide even more information about commands. To
view the info page for a certain command, enter
info followed by the name of the command (for
example, info ls).
Info pages are displayed directly in the shell. To navigate them, use the following keys:
Use Space to move forward a section (node). Use <— to move backward a section.
Move up and down with Page ↑ and Page ↓
Quit the info page viewer by pressing Q
Note that man pages and info pages do not exist for all commands. Sometimes both are available (usually for key commands), sometimes only a man page or an info page exists, and sometimes neither of them are available.
To address a certain file or directory, you must specify the path leading to that directory or file. There are two ways to specify a path:
The entire path from the root directory (/) to the
relevant file or directory. For example, the absolute path to a text
file named file.txt in your
Documents directory might be:
/home/tux/Documents/file.txt
The path from the current working directory to the relevant file or
directory. If your current working directory is
/home/tux, the relative path
file.txt in your Documents
directory is:
Documents/file.txt
However, if your working directory is
/home/tux/Music instead, you need
to move up a level to /home/tux
(with ..) before you can go further down:
../Documents/file.txt
Paths contain file names, directories or both, separated by slashes. Absolute paths always start with a slash. Relative paths do not have a slash at the beginning, but can have one or two dots.
When entering commands, you can choose either way to specify a path,
depending on your preferences or the amount of typing, both will lead to
the same result. To change directories, use the cd
command and specify the path to the directory.
If a filename or the name of a directory contains a space, either escape
the space using a back slash (\) in front of the
blank or enclose the filename in single
quotes. Otherwise Bash interprets a filename like My
Documents as the names of two files or directories,
My and Documents in this case.
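Both forms can be tried directly; `/tmp/space-demo` is just a scratch location made up for this example:

```shell
# A directory name containing a space must be escaped or quoted.
mkdir -p /tmp/space-demo && cd /tmp/space-demo
mkdir -p "My Documents"
ls -d My\ Documents      # backslash in front of the blank
ls -d 'My Documents'     # single quotes around the whole name
```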
When specifying paths, the following “shortcuts” can save you a lot of typing:
The tilde symbol (~) is a shortcut for home
directories. For example, to list the contents of your home directory,
use ls ~. To list the contents of
another user's home directory, enter ls
~USERNAME (of
course, this will only work if you have permission to view the
contents, see Section 15.6, “File Access Permissions”). For example,
entering ls ~tux would list the
contents of the home directory of a user named tux. You can use the
tilde symbol as shortcut for home directories also if you are working
in a network environment where your home directory may not be called
/home but can be mapped to any directory in the
file system.
From anywhere in the file system, you can reach your home directory by
entering cd ~ or by simply entering
cd without any options.
When using relative paths, refer to the current directory with a dot
(.). This is mainly useful for commands such as
cp or mv by which you can copy or
move files and directories.
The next higher level in the tree is represented by two dots
(..). In order to switch to the parent directory of
your current directory, enter cd .., to go up two
levels from the current directory enter cd ../..
etc.
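The dot shortcuts can be tried in a scratch directory tree (the directory names below are made up for the demonstration):

```shell
# Scratch directories to try the . and .. shortcuts.
mkdir -p /tmp/pathdemo/Music /tmp/pathdemo/Documents
cd /tmp/pathdemo/Music
cd ../Documents    # up one level with .., then down into Documents
pwd                # /tmp/pathdemo/Documents
cd ../..           # up two levels
pwd                # /tmp
```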
To apply your knowledge, find some examples below. They address basic tasks you may want to execute with files or folders using Bash.
Suppose you want to copy a file located somewhere in your home directory
to a subdirectory of /tmp that you need to create
first.
From your home directory create a subdirectory in
/tmp:
Enter
tux > mkdir /tmp/test
mkdir stands for “make directory”.
This command creates a new directory named test
in the /tmp directory. In this case, you are
using an absolute path to create the test
directory.
To check what happened, now enter
tux > ls -l /tmp
The new directory test should appear in the list
of contents of the /tmp directory.
Switch to the newly created directory with
tux > cd /tmp/test
Now create a new file in a subdirectory of your home directory and copy
it to /tmp/test. Use a relative path for this
task.
Before copying, moving or renaming a file, check if your target
directory already contains a file with the same name. If yes, consider
changing one of the filenames or use cp or
mv with options like -i, which
will prompt before overwriting an existing file. Otherwise Bash will
overwrite the existing file without confirmation.
To list the contents of your home directory, enter
tux > ls -l ~
It should contain a subdirectory called Documents
by default. If not, create this subdirectory with the
mkdir command you already know:
tux > mkdir ~/Documents
To create a new, empty file named myfile.txt in
the Documents directory, enter
tux > touch ~/Documents/myfile.txt
Usually, the touch command updates the modification
and access date for an existing file. If you use
touch with a filename which does not exist in your
target directory, it creates a new file.
Enter
tux > ls -l ~/Documents
The new file should appear in the list of contents.
To copy the newly created file, enter
tux > cp ~/Documents/myfile.txt .
Do not forget the dot at the end.
This command tells Bash to go to your home directory and to copy
myfile.txt from the
Documents subdirectory to the current directory,
/tmp/test, without changing the name of the file.
Check the result by entering
tux > ls -l
The file myfile.txt should appear in the list of
contents for /tmp/test.
Now suppose you want to rename myfile.txt into
tuxfile.txt. Finally you decide to remove the
renamed file and the test subdirectory.
To rename the file, enter
tux > mv myfile.txt tuxfile.txt
To check what happened, enter
tux > ls -l
Instead of myfile.txt,
tuxfile.txt should appear in the list of
contents.
mv stands for move and is used
with two arguments: the first specifies the source, the second
specifies the target of the operation. You can use
mv either
to rename a file or a directory,
to move a file or directory to a new location or
to do both in one step.
Coming to the conclusion that you do not need the file any longer, you can delete it by entering
tux > rm tuxfile.txt
Bash deletes the file without any confirmation.
Move up one level with cd .. and check with
tux > ls -l test
if the test directory is empty now.
If yes, you can remove the test directory by
entering
tux > rmdir test
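The whole walkthrough condenses into a few lines. This is a sketch run as a normal user; it assumes that `~/Documents` exists or can be created, as in the text above:

```shell
# The complete file-handling walkthrough as one sequence.
mkdir -p /tmp/test ~/Documents        # create target and source directories
touch ~/Documents/myfile.txt          # create an empty file
cd /tmp/test
cp ~/Documents/myfile.txt .           # copy it here; note the trailing dot
mv myfile.txt tuxfile.txt             # rename the copy
rm tuxfile.txt                        # delete it; no confirmation is asked
cd .. && rmdir test                   # remove the now-empty directory
```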
root, also called the superuser, has privileges that authorize access
to all parts of the system and the execution of administrative tasks. root
has the unrestricted capacity to make changes to the system and
has unlimited access to all files. Therefore, performing some
administrative tasks or running certain programs such as YaST requires
root permissions.
su #
In order to temporarily become root in a shell, proceed as
follows:
Enter su. You are prompted for the root
password.
Enter the password. If you mistyped the root password, the shell
displays a message. In this case, you have to re-enter
su before retyping the password. If your password
is correct, a hash symbol # appears at the end of
the prompt, signaling that you are acting as root now.
Execute your task. For example, transfer ownership of a file to a new
user, which only root is allowed to do:
root # chown wilber kde_quick.xml
After having completed your tasks as root, switch back to your
normal user account. To do so, enter
tux > exit
The hash symbol disappears and you are acting as “normal” user again.
sudo #
Alternatively, you can also use sudo (superuser
“do”) to execute some tasks which normally are reserved for
root. With sudo, administrators can grant certain users
root privileges for some commands. Depending on the system
configuration, users can then run root commands by entering their
normal password only. Due to a timestamp function, users are only
granted a “ticket” for a restricted period of time after
having entered their password. The ticket usually expires after a few
minutes. In openSUSE, sudo requires the root password by default
(if not configured otherwise by your system administrator).
For users, sudo is convenient as it prevents you from switching accounts
twice (to root and back again). To change the ownership of a file
using sudo, only one command is necessary instead of three:
tux > sudo chown wilber kde_quick.xml
After you have entered the password which you are prompted for, the
command is executed. If you enter a second root command shortly
after that, you are not prompted for the password again, because your
ticket is still valid. After a certain amount of time, the ticket
automatically expires and the password is required again. This also
prevents unauthorized persons from gaining root privileges in case
a user forgets to switch back to their normal user account and
leaves a root shell open.
In Linux, objects such as files or folders or processes generally belong to the user who created or initiated them. There are some exceptions to this rule. For more information about the exceptions, refer to Chapter 10, Access Control Lists in Linux. The group which is associated with a file or a folder depends on the primary group the user belongs to when creating the object.
When you create a new file or directory, initial access permissions for
this object are set according to a predefined scheme. As an owner of a
file or directory, you can change the access permissions for this object.
For example, you can protect files holding sensitive data against read
access by other users and you can authorize the members of your group or
other users to write, read, or execute several of your files where
appropriate. As root, you can also change the ownership of files or
folders.
Three permission sets are defined for each file object on a Linux system. These sets include the read, write, and execute permissions for each of three types of users—the owner, the group, and other users.
The following example shows the output of an ls
-l command in a shell. This command lists the
contents of a directory and shows the details for each file and folder in
that directory.
-rw-r----- 1 tux users      0 2015-06-23 16:08 checklist.txt
-rw-r--r-- 1 tux users  53279 2015-06-21 13:16 gnome_quick.xml
-rw-rw---- 1 tux users      0 2015-06-23 16:08 index.htm
-rw-r--r-- 1 tux users  70733 2015-06-21 09:35 kde-start.xml
-rw-r--r-- 1 tux users  47896 2015-06-21 09:46 kde_quick.xml
drwxr-xr-x 2 tux users     48 2015-06-23 16:09 local
-rwxr--r-- 1 tux users 624398 2015-06-23 15:43 tux.sh
As shown in the third column, all objects belong to user
tux. They are
assigned to the group
users which is the
primary group the user tux belongs to.
To retrieve the access permissions, the first column of the list must be
examined more closely. Let's have a look at the file
kde-start.xml:
| Type | User Permissions | Group Permissions | Permissions for Others |
|---|---|---|---|
| - | rw- | r-- | r-- |
The first column of the list consists of one leading character followed
by nine characters grouped in three blocks. The leading character
indicates the file type of the object: in this case, the hyphen
(-) shows that
kde-start.xml is a file. If you find the character
d instead, this shows that the object is a directory,
like local in
Example 15.1, “Access Permissions For Files and Folders”.
The next three blocks show the access permissions for the owner, the
group and other users (from left to right). Each block follows the same
pattern: the first position shows read permissions
(r), the next position shows write permissions
(w), the last one shows execute permission
(x). A lack of either permission is indicated by
-. In our example, the owner of
kde-start.xml has read and write access to the file
but cannot execute it. The users group can read
the file but cannot write or execute it. The same holds true for the
other users as shown in the third block of characters.
Access permissions have a slightly different impact depending on the type of object they apply to: file or directory. The following table shows the details:
| Access Permission | File | Folder |
|---|---|---|
| Read (r) | Users can open and read the file. | Users can view the contents of the directory. Without this permission, users cannot list the contents of this directory with ls, for example. |
| Write (w) | Users can change the file: They can add or drop data and can even delete the contents of the file. However, this does not include the permission to remove the file completely from the directory as long as they do not have write permissions for the directory where the file is located. | Users can create, rename or delete files in the directory. |
| Execute (x) | Users can execute the file. This permission is only relevant for files like programs or shell scripts, not for text files. If the operating system can execute the file directly, users do not need read permission to execute the file. However, if the file must be interpreted like a shell script or a Perl program, additional read permission is needed. | Users can change into the directory and execute files there. If they do not have read access to that directory they cannot list the files but can access them nevertheless if they know of their existence. |
Note that access to a certain file is always dependent on the correct combination of access permissions for the file itself and the directory it is located in.
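The permission string can also be read back after setting it. This is a small demonstration with a made-up file name; the `stat -c` option assumes GNU coreutils, as shipped with openSUSE:

```shell
# Set permissions symbolically, then read the permission string back.
cd /tmp
touch perm-demo.txt
chmod u=rw,g=r,o= perm-demo.txt    # owner rw, group r, others nothing
ls -l perm-demo.txt                # first column shows -rw-r-----
stat -c '%A %U %G' perm-demo.txt   # permissions, owner and group only
```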
In Linux, objects such as files or folder or processes generally belong to the user who created or initiated them. The group which is associated with a file or a folder depends on the primary group the user belongs to when creating the object. When you create a new file or directory, initial access permissions for this object are set according to a predefined scheme. For further details refer to Section 15.6, “File Access Permissions”.
As the owner of a file or directory (and, of course, as
root), you can change the
access permissions to this object.
To change object attributes like access permissions of a file or folder,
use the chmod command followed by these parameters:
the users for which to change the permissions,
the type of access permission you want to remove, set or add and
the files or folders for which you want to change permissions separated by spaces.
The users for which you can change file access permissions fall into the
following categories: the owner of the file (user, u),
the group that owns the file (group, g) and the other
users (others, o). You can add, remove or set one or
more of the following permissions: read, write or execute.
As root, you can also change the ownership of a file: with the
command chown
(change owner) you can transfer ownership to a new user.
The following example shows the output of an ls
-l command in a shell.
-rw-r----- 1 tux users      0 2015-06-23 16:08 checklist.txt
-rw-r--r-- 1 tux users  53279 2015-06-21 13:16 gnome_quick.xml
-rw-rw---- 1 tux users      0 2015-06-23 16:08 index.htm
-rw-r--r-- 1 tux users  70733 2015-06-21 09:35 kde-start.xml
-rw-r--r-- 1 tux users  47896 2015-06-21 09:46 kde_quick.xml
drwxr-xr-x 2 tux users     48 2015-06-23 16:09 local
-r-xr-xr-x 1 tux users 624398 2015-06-23 15:43 tux.jpg
In the example above, user tux owns
the file kde-start.xml and has read and write
access to the file but cannot execute it. The
users group can read the file but cannot write
or execute it. The same holds true for the other users as shown by the
third block of characters.
Suppose you are tux and want to
modify the access permissions to your files:
If you want to grant the users group also
write access to kde-start.xml, enter
tux > chmod g+w kde-start.xml
To grant the users group and other users
write access to kde-start.xml, enter
tux > chmod go+w kde-start.xml
To remove write access for all users, enter
tux > chmod -w kde-start.xml
If you do not specify any kind of users, the changes apply to all
users—the owner of the file, the owning group and the others.
Now even the owner tux does not
have write access to the file without first reestablishing write
permissions.
To prevent the users group and others from
changing into the directory local, enter
tux > chmod go-x local
To grant others write permission for the two files
kde_quick.xml and
gnome_quick.xml, enter
tux > chmod o+w kde_quick.xml gnome_quick.xml
Suppose you are tux and want to
transfer the ownership of the file kde_quick.xml
to another user named wilber. In
this case, proceed as follows:
Enter the username and password for root.
Enter
root # chown wilber kde_quick.xml
Check what happened with
tux > ls -l kde_quick.xml
You should get the following output:
-rw-r--r-- 1 wilber users 47896 2006-06-21 09:46 kde_quick.xml
If the ownership is set according to your wishes, switch back to your normal user account.
Entering commands in Bash can involve a lot of typing. This section introduces some features that can save you both time and typing.
By default, Bash “remembers” commands you have entered. This feature is called history. You can browse through commands that have been entered before, select one you want to repeat and then execute it again. To do so, press ↑ repeatedly until the desired command appears at the prompt. To move forward through the list of previously entered commands, press ↓. For easier repetition of a certain command from Bash history, just type the first letter of the command you want to repeat and press Page ↑.
You can now edit the selected command (for example, change the name of a file or a path), before you execute the command by pressing Enter. To edit the command line, move the cursor to the desired position using the arrow keys and start typing.
You can also search for a certain command in the history. Press Ctrl–R to start an incremental search function, showing the following prompt:
tux > (reverse-i-search)`':
Just type one or several letters from the command you are searching for. Each character you enter narrows down the search. The corresponding search result is shown on the right side of the colon whereas your input appears on the left of the colon. To accept a search result, press Esc. The prompt now changes to its normal appearance and shows the command you chose. You can now edit the command or directly execute it by pressing Enter.
Completing a filename or directory name to its full length after typing its first letters is another helpful feature of Bash. To do so, type the first letters, then press →| (Tabulator). If the filename or path can be uniquely identified, it is completed at once and the cursor moves to the end of the filename. You can then enter the next option of the command, if necessary. If the filename or path cannot be uniquely identified (because there are several filenames starting with the same letters), it is only completed up to the point where it becomes ambiguous. You can then obtain a list of the possible completions by pressing →| a second time. After this, enter the next letters of the file or path and try completion again by pressing →|. When completing filenames and paths with →|, you can simultaneously check whether the file or path you want to enter really exists (and you can be sure of getting the spelling right).
You can replace one or more characters in a filename with a wild card for pathname expansion. Wild cards are characters that can stand for other characters. There are three different types of these in Bash:
| Wild Card | Function |
|---|---|
| ? | Matches exactly one arbitrary character |
| * | Matches any number of characters |
| [SET] | Matches one of the characters from the group specified inside the square brackets, which is represented here by the string SET. |
The following examples illustrate how to make use of these convenient features of Bash.
If you already worked through the examples in Section 15.4.1, “Examples for Working with Files and Directories”, your shell buffer should be filled with commands that you can retrieve using the history function.
Press ↑ repeatedly until cd ~
appears.
Press Enter to execute the command and to switch to your home directory.
By default, your home directory contains several subdirectories starting
with the same letter, for example Desktop,
Documents and Downloads.
Type cd D and press →|.
Nothing happens since Bash cannot identify to which one of the subdirectories you want to change.
Press →| again to see the list of possible choices:
tux > cd D
Desktop/   Documents/  Downloads/
tux > cd D
The prompt still shows your initial input. Type the next character of the subdirectory you want to go to and press →| again.
Bash now completes the path.
You can now execute the command with Enter.
Now suppose that your home directory contains several files with
various file extensions. It also holds several versions of one file
which you saved under different filenames
myfile1.txt, myfile2.txt etc.
You want to search for certain files according to their properties.
First, create some test files in your home directory:
Use the touch command to create several (empty)
files with different file extensions, for example
.pdf, .xml and
.jpg.
You can do this consecutively (do not forget to use the Bash history
function) or with only one touch command: simply
add several filenames separated by a space.
Create at least two files that have the same file extension, for
example .html.
To create several “versions” of one file, enter
tux > touch myfile{1..5}.txt
This command creates five consecutively numbered files:
myfile1.txt, …,
myfile5.txt.
List the contents of the directory. It should look similar to this:
tux > ls -l
-rw-r--r-- 1 tux users 0 2006-07-14 13:34 foo.xml
-rw-r--r-- 1 tux users 0 2006-07-14 13:47 home.html
-rw-r--r-- 1 tux users 0 2006-07-14 13:47 index.html
-rw-r--r-- 1 tux users 0 2006-07-14 13:47 toc.html
-rw-r--r-- 1 tux users 0 2006-07-14 13:34 manual.pdf
-rw-r--r-- 1 tux users 0 2006-07-14 13:49 myfile1.txt
-rw-r--r-- 1 tux users 0 2006-07-14 13:49 myfile2.txt
-rw-r--r-- 1 tux users 0 2006-07-14 13:49 myfile3.txt
-rw-r--r-- 1 tux users 0 2006-07-14 13:49 myfile4.txt
-rw-r--r-- 1 tux users 0 2006-07-14 13:49 myfile5.txt
-rw-r--r-- 1 tux users 0 2006-07-14 13:32 tux.png
With wild cards, select certain subsets of the files according to various criteria:
To list all files with the .html extension,
enter
tux > ls -l *.html
To list all “versions” of
myfile.txt, enter
tux > ls -l myfile?.txt
Note that you can only use the ? wild card here
because the numbering of the files is single-digit. As soon as you
have a file named myfile10.txt, you must use
the * wild card to view all versions of
myfile.txt (or add another question mark, so
your string looks like myfile??.txt).
To remove, for example, versions 1 to 3 and version 5 of
myfile.txt, enter
tux > rm myfile[1-3,5].txt
Check the result with
tux > ls -l
Of all myfile.txt versions only
myfile4.txt should be left.
You can also combine several wild cards in one command. In the example
above, rm myfile[1-3,5].* would lead to the same
result as rm myfile[1-3,5].txt because there are only
files with the extension .txt available.
rm Commands
Wildcards in a rm command can be very useful but
also dangerous: you might delete more files from your directory than
intended. To see which files would be affected by the
rm, run your wildcard string with
ls instead of rm first.
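Following this tip, a short sketch of previewing a wild card pattern with ls before handing the same pattern to rm (the directory and file names are examples only):

```shell
# Work in a scratch directory so no real files are at risk.
dir=$(mktemp -d)
cd "$dir"
touch myfile1.txt myfile2.txt myfile10.txt notes.txt

# Preview which files the pattern matches before deleting anything:
ls myfile?.txt   # matches myfile1.txt and myfile2.txt, but not myfile10.txt

# The same pattern, now known to be safe, handed to rm:
rm myfile?.txt
ls               # myfile10.txt and notes.txt remain
```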
To edit files from the command line, you need to know the vi editor. vi is a default editor that can be found on nearly every UNIX/Linux system. It has several operating modes in which the keys you press have different functions. This does not make it very easy for beginners, but you should know at least the most basic operations with vi. There may be situations where no editor other than vi is available.
Basically, vi makes use of three operating modes:
In this mode, vi accepts certain key combinations as commands. Simple tasks such as searching words or deleting a line can be executed.
In this mode, you can write normal text.
In this mode, also known as colon mode (as you have to enter a colon to switch to this mode), vi can also execute more complex tasks such as searching and replacing text.
In the following (very simple) example, you will learn how to open and edit a file with vi, how to save your changes and quit vi.
In the following, find several commands that you can enter in vi by just pressing keys. These appear in uppercase as on a keyboard. If you need to enter a key in uppercase, this is stated explicitly by showing a key combination including the Shift key.
To create and open a new file with vi, enter
tux > vi textfile.txt
By default, vi opens in command mode in which you cannot enter text.
Press I to switch to insert mode. The bottom line changes and indicates that you now can insert text.
Write some sentences. If you want to insert a new line, first press Esc to switch back to command mode. Press O to insert a new line and to switch to insert mode again.
In the insert mode, you can edit the text with the arrow keys and with Del.
To leave vi, press Esc to switch to command mode again. Then press : which takes you to the extended mode. The bottom line now shows a colon.
To leave vi and save your changes, type wq
(w for write;
q for quit) and press
Enter. If you want to save the file under
a different name, type w
FILENAME and press
Enter.
To leave vi without saving, type q! instead and
press Enter.
Bash offers you several commands to search for files and to search for the contents of files:
find
With find, search for a file in a given directory.
The first argument specifies the directory in which to start the
search. The option -name must be followed by a search
string, which may also include wild cards. Unlike
locate, which uses a database,
find scans the actual directory.
grep
The grep command finds a specific search string in
the specified text files. If the search string is found, the command
displays the line in which the search string was found,
along with the filename. If desired, use wild cards to specify
filenames.
To search your home directory for all occurrences of filenames that
contain the file extension .txt, use:
tux > find ~ -name '*.txt' -print
To search a directory (in this case, your home directory) for all
occurrences of files which contain, for example, the word
music, use:
tux > grep music ~/*
grep is case-sensitive by default. Hence, with the
command above you will not find any files containing
Music. To ignore case, use the
-i option.
To use a search string which consists of more than one word, enclose the string in double quotation marks, for example:
tux > grep "music is great" ~/*
When searching the contents of a file with grep,
the output gives you the line in which the
search string was found, along with the filename. Often
this contextual information is still not enough to decide
whether you want to open and edit this file. Bash offers several
commands to have a quick look at the contents of a text file directly in
the shell, without opening an editor.
head
With head you can view the first lines of a text
file. If you do not specify the command any further,
head shows the first 10 lines of a text file.
tail
The tail command is the counterpart of
head. If you use tail without
any further options it displays the last 10 lines of a text file. This
can be very useful to view log files of your system, where the most
recent messages or log entries are usually found at the end of the
file.
less
With less, display the whole contents of a text
file. To move up and down half a page use Page ↑
and Page ↓. Use Space to
scroll down one page. Home takes you to the
beginning, and End to the end of the document. To
end the viewing mode, press Q.
more
Instead of less, you can also use the older program
more. It has basically the same
function—however, it is less convenient because it does not
allow you to scroll backward. Use Space to move
forward. When you reach the end of the document, the viewer closes
automatically.
cat
The cat command displays the contents of a file,
printing the entire contents to the screen without interruption. As
cat does not allow you to scroll, it is not very
useful as a viewer, but it is often used in combination with other
commands.
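A quick sketch of these viewers on a generated sample file (the file name is an example only):

```shell
# Create a sample file of twenty numbered lines, standing in for a log file.
dir=$(mktemp -d)
seq 1 20 > "$dir/log.txt"

head -n 3 "$dir/log.txt"          # first three lines: 1, 2, 3
tail -n 2 "$dir/log.txt"          # last two lines: 19, 20
cat -n "$dir/log.txt" | tail -n 1 # cat feeding a pipe: the numbered last line
```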
Sometimes it is useful to write the output of a command to a file for further editing, or to combine several commands, using the output of one command as the input for the next one. The shell offers this functionality by means of redirection or pipes.
Normally, the standard output in the shell is your screen (or an open shell window) and the standard input is the keyboard. With certain symbols you can redirect the input or the output to another object, such as a file or another command.
With > you can forward the output of a command
to a file (output redirection), with < you can
use a file as input for a command (input redirection).
By means of a pipe symbol | you can also redirect
the output: with a pipe, you can combine several commands, using the
output of one command as input for the next command. In contrast to
the other redirection symbols > and <, the use of the pipe is
not constrained to files.
To write the output of a command like ls to a file,
enter
tux > ls -l > filelist.txt
This creates a file named filelist.txt that
contains the list of contents of your current directory as generated
by the ls command.
However, if a file named filelist.txt already
exists, this command overwrites the existing file. To prevent this,
use >> instead of >. Entering
tux > ls -l >> filelist.txt
simply appends the output of the ls command to an
already existing file named filelist.txt. If the
file does not exist, it is created.
Redirection also works the other way around. Instead of using the standard input from the keyboard for a command, you can use a file as input:
tux > sort < filelist.txt
This will force the sort command to get its input
from the contents of filelist.txt. The result is
shown on the screen. Of course, you can also write the result into
another file, using a combination of redirections:
tux > sort < filelist.txt > sorted_filelist.txt
If a command generates a lengthy output, like ls
-l may do, it may be useful to pipe the
output to a viewer like less to be able to scroll
through the pages. To do so, enter
tux > ls -l | less
The list of contents of the current directory is shown in
less.
The pipe is also often used in combination with the
grep command in order to search for a certain
string in the output of another command. For example, if you want to
view a list of files in a directory which are owned by the user
tux, enter
tux > ls -l | grep tux
As you have seen in Section 15.8, “Editing Texts”, programs can be
started from the shell. Applications with a graphical user interface need
the X Window System and can only be started from a terminal window within
a graphical user interface. For example, if you want to open a file named
vacation.pdf in your home directory from a terminal
window in KDE or GNOME, simply run
okular ~/vacation.pdf (or
evince ~/vacation.pdf) to start a PDF viewer
displaying your file.
When looking at the terminal window again you will realize that the
command line is blocked as long as the PDF viewer is open, meaning that
your prompt is not available. To change this, press Ctrl–Z to suspend
the process and enter bg to send the process to the
background.
Now you can still have a look at vacation.pdf while
your prompt is available for further commands. An easier way to achieve
this is by sending a process to the background directly when starting it.
To do so, add an ampersand at the end of the command:
tux > okular ~/vacation.pdf &
If you have started several background processes (also named jobs) from
the same shell, the jobs
command gives you an overview of the jobs. It also shows the
job number in brackets and their status:
tux > jobs
[1] Running okular book.opensuse.startup-xep.pdf &
[2]- Running okular book.opensuse.reference-xep.pdf &
[3]+ Stopped man jobs
To bring a job to the foreground again, enter
fg JOB_NUMBER.
Whereas jobs only shows the background
processes started from a specific shell, the ps
command (run without options) shows a list of all your
processes—those you started. Find an example output below:
tux > ps
PID TTY TIME CMD
15500 pts/1 00:00:00 bash
28214 pts/1 00:00:00 okular
30187 pts/1 00:00:00 kwrite
30280 pts/1 00:00:00 ps
In case a program cannot be terminated in the normal way,
use the kill command to stop the process (or
processes) belonging to that program. To do so, specify the process ID
(PID) shown by the output of ps. For example, to shut
down the KWrite editor in the example above, enter
tux > kill 30187
This sends a TERM signal that instructs the program to shut itself down.
Alternatively, if the program or process you want to terminate is a
background job and is shown by the jobs command, you
can also use the kill command in combination with the
job number to terminate this process. When identifying the job with the
job number, you must prefix the number with a percent character
(%):
tux > kill %JOB_NUMBER
If kill does not help—as is sometimes the case
for “runaway” programs—try
tux > kill -9 PID
This sends a KILL signal instead of a TERM signal, usually bringing the specified process to an end.
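The whole cycle of starting a background job, finding its PID and terminating it can be sketched like this (sleep stands in for a real long-running program):

```shell
# Start a long-running command in the background; sleep is a stand-in program.
sleep 30 &
pid=$!                     # $! holds the PID of the last backgrounded process

jobs                       # the job appears in this shell's job list
kill "$pid"                # send the default TERM signal
wait "$pid" 2>/dev/null    # reap the terminated job
kill -0 "$pid" 2>/dev/null || echo "process is gone"
```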
This section is intended to introduce the most basic set of commands for handling jobs and processes. Find an overview for system administrators in Section 2.3, “Processes”.
On Linux, there are two types of commands that make data easier to transfer:
Archivers, which create a big file out of several smaller ones. The most
commonly used archiver is tar, another example is
cpio.
Compressors, which losslessly make a file smaller. The most commonly
used compressors are gzip and
bzip2.
When combining these two types of commands, their effect is comparable to
the compressed archive files that are prevalent on other operating
systems, for example, ZIP or RAR.
To pack the test directory with all its
files and subdirectories into an archive named
testarchive.tar, do the following:
Open a shell.
Use cd to change to your home directory where the
test directory is located.
Create the archive with:
tux > tar -cvf testarchive.tar test
The -c option creates the archive. The
-f option writes the archive to the specified
file. The -v option lists the files as
they are processed.
The test directory with all its files and
directories has remained unchanged on your hard disk.
View the contents of the archive file with:
tux > tar -tf testarchive.tar
To unpack the archive, use:
tux > tar -xvf testarchive.tar
If files in your current directory have the same names as files in the archive, they will be overwritten without warning.
To compress files, use gzip or, for better
compression, bzip2.
For this example, reuse the archive
testarchive.tar from
Procedure 15.8, “Archiving Files”.
To compress the archive, use:
tux > gzip testarchive.tar
With ls, now see that the file
testarchive.tar is no longer there and that the
file testarchive.tar.gz has been created instead.
As an alternative, use bzip2 testarchive.tar which
works analogously but provides somewhat better compression.
Now decompress and unarchive the file again:
This can be done in two steps by first decompressing and then unarchiving the file:
tux > gzip --decompress testarchive.tar.gz
tux > tar -xvf testarchive.tar
You can also decompress and unarchive in one step:
tux > tar -xzvf testarchive.tar.gz
With ls, you can see that a new
test directory has been created with the same
contents as your test directory in your home
directory.
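Creating a compressed archive also works in a single step, because the -z option makes tar run gzip itself. A short sketch (the directory and file names are examples only):

```shell
# Build a small sample directory to pack.
workdir=$(mktemp -d)
cd "$workdir"
mkdir test
echo "hello" > test/a.txt

# -c create, -z compress with gzip, -v list files, -f archive name:
tar -czvf testarchive.tar.gz test

# Unpack and decompress in one step into a separate directory:
mkdir unpacked
cd unpacked
tar -xzvf ../testarchive.tar.gz
cat test/a.txt   # prints "hello"
```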
This section provides an overview of the most important Linux commands. There are many more commands than listed in this chapter. Along with the individual commands, parameters are listed and, where appropriate, a typical sample application is introduced.
Adjust the parameters to your needs. It makes no sense to write ls
file if no file named file actually exists.
You can usually combine several parameters, for example, by writing
ls -la instead of ls -l -a.
The following section lists the most important commands for file management. It covers everything from general file administration to the manipulation of file system ACLs.
ls OPTIONS FILES
If you run ls without any additional parameters,
the program lists the contents of the current directory in short
form.
-l: Detailed list
-a: Displays hidden files
cp OPTIONS SOURCE TARGET
Copies source to target.
-i: Waits for confirmation, if necessary, before an existing
target is overwritten
-r: Copies recursively (includes subdirectories)
mv OPTIONS SOURCE TARGET
Copies source to target
then deletes the original source.
-b: Creates a backup copy of the source before
moving
-i: Waits for confirmation, if necessary, before an existing
target file is overwritten
rm OPTIONS FILES
Removes the specified files from the file system. Directories are not
removed by rm unless the option
-r is used.
-r: Deletes any existing subdirectories
-i: Waits for confirmation before deleting each file
ln OPTIONS SOURCE TARGET
Creates an internal link from source to
target. Normally, such a link points directly to
source on the same file system. However, if
ln is executed with the -s
option, it creates a symbolic link that only stores the path of
source, which also enables linking
across file systems.
-s: Creates a symbolic link
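A minimal sketch of a symbolic link in action (the file names are examples only):

```shell
# Create a file and a symbolic link pointing to it.
dir=$(mktemp -d)
cd "$dir"
echo "original content" > source.txt
ln -s source.txt mylink

ls -l mylink   # the listing shows: mylink -> source.txt
cat mylink     # reading the link follows it to source.txt
```

Because the link stores only the path of the source, it keeps working even when the link and the source are on different file systems.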
cd OPTIONS DIRECTORY
Changes the current directory. cd without any
parameters changes to the user's home directory.
mkdir OPTIONS DIRECTORY
Creates a new directory.
rmdir OPTIONS DIRECTORY
Deletes the specified directory if it is empty.
chown OPTIONS USER_NAME[:GROUP] FILES
Transfers ownership of a file to the user with the specified user name.
-R: Changes files and directories in all subdirectories
chgrp OPTIONS GROUP_NAME FILES
Transfers the group ownership of a given file to
the group with the specified group name. The file owner can change
group ownership only if they are a member of both the current and the new
group.
chmod OPTIONS MODE FILES
Changes the access permissions.
The mode parameter has three parts:
group, access, and
access type. group accepts the
following characters:
u: User
g: Group
o: Others
For access, grant access with +
and deny it with -.
The access type is controlled by the following
options:
r: Read
w: Write
x: Execute—executing files or changing to the directory
s: Setuid bit—the application or program is started as if it were started by the owner of the file
As an alternative, a numeric code can be used. The four digits of this code are each composed of the sum of the values 4, 2, and 1—the decimal result of a binary mask. The first digit sets the set user ID (SUID) (4), the set group ID (2), and the sticky (1) bits. The second digit defines the permissions of the owner of the file. The third digit defines the permissions of the group members and the last digit sets the permissions for all other users. The read permission is set with 4, the write permission with 2, and the permission for executing a file is set with 1. The owner of a file usually receives a 6, or a 7 for executable files.
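For example, the numeric notation can be checked against the symbolic permissions column of ls -l (the file name is an example only):

```shell
# 6 = 4 (read) + 2 (write); 4 = read only; 0 = no permissions.
dir=$(mktemp -d)
touch "$dir/report.txt"

chmod 644 "$dir/report.txt"   # owner: rw-, group: r--, others: r--
ls -l "$dir/report.txt"       # permissions column reads -rw-r--r--

chmod 750 "$dir/report.txt"   # owner: rwx, group: r-x, others: ---
ls -l "$dir/report.txt"       # permissions column reads -rwxr-x---
```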
gzip PARAMETERS FILES
This program compresses the contents of files using complex
mathematical algorithms. Files compressed in this way are given the
extension .gz and need to be uncompressed before
they can be used. To compress several files or even entire
directories, use the tar command.
-d: Decompresses the packed gzip files so they return to their
original size and can be processed normally (like the command
gunzip)
tar OPTIONS ARCHIVE FILES
tar puts one or more files into an archive.
Compression is optional. tar is a quite complex
command with several options available. The most frequently used
options are:
-f: Writes the output to a file and not to the screen as is usually the case
-c: Creates a new TAR archive
-r: Adds files to an existing archive
-t: Outputs the contents of an archive
-u: Adds files, but only if they are newer than the files already contained in the archive
-x: Unpacks files from an archive (extraction)
-z: Packs the resulting archive with gzip
-j: Compresses the resulting archive with bzip2
-v: Lists files processed
The archive files created by tar end with
.tar. If the TAR archive was also compressed
using gzip, the ending is
.tgz or .tar.gz. If it was
compressed using bzip2, the ending is
.tar.bz2.
find OPTIONS
With find, search for a file in a given directory.
The first argument specifies the directory in which to start the
search. The option -name must be followed by a
search string, which may also include wild cards. Unlike
locate, which uses a database,
find scans the actual directory.
file OPTIONS FILES
In Linux, files can have a file extension but do not need to have
one. The file command determines the file type of a given
file. With the output of file, you can then choose
an appropriate application with which to open the file.
-z: Tries to look inside compressed files
cat OPTIONS FILES
The cat command displays the contents of a file,
printing the entire contents to the screen without interruption.
-n: Numbers the output on the left margin
less OPTIONS FILES
This command can be used to browse the contents of the specified file. Scroll half a screen page up or down with Page ↑ and Page ↓ or a full screen page down with Space. Jump to the beginning or end of a file using Home and End. Press Q to quit the program.
grep OPTIONS SEARCH_STRING FILES
The grep command finds a specific search string in
the specified files. If the search string is found, the command
displays the line in which SEARCH_STRING was
found along with the file name.
-i: Ignores case
-H: Displays the filename in front of each matching line
-n: Additionally displays the numbers of the lines in which it found a hit
-l: Only lists the names of the files in which the search string occurs, but not the matching lines
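A short sketch of these options on two small test files (the names and contents are examples only):

```shell
# Two throwaway files: one contains the word in mixed case, one does not.
dir=$(mktemp -d)
printf 'Music is great\nsilence\n' > "$dir/a.txt"
printf 'no match here\n'           > "$dir/b.txt"

grep -i music "$dir"/*.txt    # case-insensitive: finds "Music is great" in a.txt
grep -in music "$dir"/*.txt   # additionally shows the line number of the hit
grep -il music "$dir"/*.txt   # lists only the name of the matching file
```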
diff OPTIONS FILE_1 FILE_2
The diff command compares the contents of any two
files. The output produced by the program lists all lines that do not
match. This is frequently used by programmers who need only to send
their program alterations and not the entire source code.
-q: Only reports whether the two files differ
-u: Produces a “unified” diff, which makes the output more readable
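A minimal sketch comparing two short files (the names and contents are examples only; the || true is needed in scripts because diff exits with a non-zero status when the files differ):

```shell
dir=$(mktemp -d)
printf 'one\ntwo\nthree\n' > "$dir/old.txt"
printf 'one\n2\nthree\n'   > "$dir/new.txt"

# Unified output marks removed lines with "-" and added lines with "+":
diff -u "$dir/old.txt" "$dir/new.txt" || true

# -q only states that the files differ, without showing the lines:
diff -q "$dir/old.txt" "$dir/new.txt" || true
```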
mount OPTIONS DEVICE MOUNT_POINT
This command can be used to mount any data media, such as hard disks, CD-ROM drives, and other drives, to a directory of the Linux file system.
-r: Mount read-only
-t FILE_SYSTEM
Specify the file system: For Linux hard disks, this is commonly
ext4, xfs, or
btrfs.
For hard disks not defined in the file
/etc/fstab, the device type must also be
specified. In this case, only
root can mount it. If the
file system needs to also be mounted by other users, enter the option
user in the appropriate line in the
/etc/fstab file (separated by commas) and save
this change. Further information is available in the
mount(1) man page.
umount OPTIONS MOUNT_POINT
This command unmounts a mounted drive from the file system. To
prevent data loss, run this command before taking a removable data
medium from its drive. Normally, only
root is allowed to run the
commands mount and umount. To
enable other users to run these commands, edit the
/etc/fstab file to specify the option
user for the relevant drive.
The following section lists a few of the most important commands needed for retrieving system information and controlling processes and the network.
df OPTIONS DIRECTORY
The df (disk free) command, when used without any
options, displays information about the total disk space, the disk
space currently in use, and the free space on all the mounted drives.
If a directory is specified, the information is limited to the drive
on which that directory is located.
-h: Shows the number of occupied blocks in gigabytes, megabytes, or kilobytes—in human-readable format
-T: Displays the type of the file system (ext2, nfs, etc.)
du OPTIONS PATH
This command, when executed without any parameters, shows the total disk space occupied by files and subdirectories in the current directory.
-a: Displays the size of each individual file
-h: Output in human-readable form
-s: Displays only the calculated total size
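A short sketch (the directory and file names are examples only):

```shell
# A throwaway directory tree to measure.
dir=$(mktemp -d)
mkdir "$dir/sub"
echo "some data" > "$dir/sub/file.txt"

du -sh "$dir"   # one human-readable total for the whole tree
du -ah "$dir"   # every file and subdirectory listed individually
```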
free OPTIONS
The command free displays information about RAM
and swap space usage, showing the total and the used amount in both
categories. See Section 15.1.7, “The free Command” for more information.
-b: Output in bytes
-k: Output in kilobytes
-m: Output in megabytes
date OPTIONS
This simple program displays the current system time. If run as
root, it can also be used
to change the system time. Details about the program are available in
the date(1) man page.
top OPTIONS
top provides a quick overview of the currently
running processes. Press H to access a page that
briefly explains the main options for customizing the program.
ps OPTIONS PROCESS_ID
If run without any options, this command displays a table of all your own programs or processes—those you started. The options for this command are not preceded by a hyphen.
aux: Displays a detailed list of all processes, independent of the owner
kill OPTIONS PROCESS_ID
Unfortunately, sometimes a program cannot be terminated in the normal
way. In most cases, you should still be able to stop such a runaway
program by executing the kill command, specifying
the respective process ID (see top and
ps). kill sends a
TERM signal that instructs the program to shut
itself down. If this does not help, the following parameter can be
used:
-9: Sends a KILL signal instead of a TERM signal, bringing the specified process to an end in almost all cases
killall OPTIONS PROCESS_NAME
This command is similar to kill, but uses the
process name (instead of the process ID) as an argument, ending all
processes with that name.
ping OPTIONS HOSTNAME_OR_IP_ADDRESS
The ping command is the standard tool for testing
the basic functionality of TCP/IP networks. It sends a small data
packet to the destination host, requesting an immediate reply. If
this works, ping displays a message to that
effect, which indicates that the network link is functioning.
-c NUMBER: Determines the total number of packets to send and ends after they have been dispatched (by default, there is no limitation set)
-f: Flood ping: sends as many data packets as
possible; a popular means, reserved for
root, to test networks
-i VALUE: Specifies the interval between two data packets in seconds (default: one second)
host OPTIONS HOSTNAME SERVER
The domain name system resolves domain names to IP addresses. With this tool, send queries to name servers (DNS servers).
ssh OPTIONS [USER@]HOSTNAME COMMAND
SSH is actually an Internet protocol that enables you to work on remote hosts across a network. SSH is also the name of a Linux program that uses this protocol to enable operations on remote computers.
passwd OPTIONS USER_NAME
Users may change their own passwords at any time using this command.
The administrator root can
use the command to change the password of any user on the system.
su OPTIONS USER_NAME
The su command makes it possible to log in under a
different user name from a running session. Specify a user name and the
corresponding password. The password is not required from
root, because
root is authorized to
assume the identity of any user. When using the command without
specifying a user name, you are prompted for the
root password and change to
the superuser (root). Use
su - to start a login shell for a different user.
halt OPTIONS
To avoid loss of data, you should always use this program to shut down your system.
reboot OPTIONS
Does the same as halt except the system performs
an immediate reboot.
clearThis command cleans up the visible area of the console. It has no options.
There are many more commands than listed in this chapter. For information about other commands or more detailed information, also see the publication Linux in a Nutshell by O'Reilly.
Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.
Traditionally, the shell is Bash (Bourne again Shell). When this chapter speaks about “the shell” it means Bash. There are actually more available shells than Bash (ash, csh, ksh, zsh, …), each employing different features and characteristics. If you need further information about other shells, search for shell in YaST.
A shell can be invoked as an:
Interactive login shell.
This is used when logging in to a machine, invoking Bash with the
--login option or when logging in to a remote machine
with SSH.
“Ordinary” interactive shell. This is normally the case when starting xterm, konsole, gnome-terminal or similar tools.
Non-interactive shell. This is used when invoking a shell script at the command line.
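A script can check how its shell was invoked. The following sketch uses the special parameter $- and the shopt builtin (note that shopt is Bash-specific, not POSIX):

```shell
# $- lists the current shell flags; "i" is present only in interactive shells
case $- in
  *i*) mode="interactive" ;;
  *)   mode="non-interactive" ;;
esac
echo "This shell is $mode"

# "shopt -q login_shell" succeeds only in a Bash login shell
if shopt -q login_shell 2>/dev/null; then
  echo "and a login shell"
else
  echo "and not a login shell"
fi
```

Run non-interactively as a script, this prints "non-interactive" and "not a login shell".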
Depending on which type of shell you use, different configuration files are read. The following tables show the login and non-login shell configuration files.
|
File |
Description |
|---|---|
|
|
Do not modify this file, otherwise your modifications can be destroyed during your next update! |
|
|
Use this file if you extend |
|
|
Contains system-wide configuration files for specific programs |
|
|
Insert user specific configuration for login shells here |
Note that the login shell also sources the configuration files listed under Table 16.2, “Bash Configuration Files for Non-Login Shells”.
|
|
Do not modify this file, otherwise your modifications can be destroyed during your next update! |
|
|
Use this file to insert your system-wide modifications for Bash only |
|
|
Insert user specific configuration here |
Additionally, Bash uses some more files:
|
File |
Description |
|---|---|
|
|
Contains a list of all commands you have been typing |
|
|
Executed when logging out |
|
|
User defined aliases of frequently used commands. See
|
The following table provides a short overview of the most important higher-level directories that you find on a Linux system. Find more detailed information about the directories and important subdirectories in the following list.
|
Directory |
Contents |
|---|---|
|
|
Root directory—the starting point of the directory tree. |
|
|
Essential binary files, such as commands that are needed by both the system administrator and normal users. Usually also contains the shells, such as Bash. |
|
|
Static files of the boot loader. |
|
|
Files needed to access host-specific devices. |
|
|
Host-specific system configuration files. |
|
|
Holds the home directories of all users who have accounts on the system.
However, |
|
|
Essential shared libraries and kernel modules. |
|
|
Mount points for removable media. |
|
|
Mount point for temporarily mounting a file system. |
|
|
Add-on application software packages. |
|
|
Home directory for the superuser |
|
|
Essential system binaries. |
|
|
Data for services provided by the system. |
|
|
Temporary files. |
|
|
Secondary hierarchy with read-only data. |
|
|
Variable data such as log files. |
|
|
Only available if you have both Microsoft Windows* and Linux installed on your system. Contains the Windows data. |
The following list provides more detailed information and gives some examples of which files and subdirectories can be found in the directories:
/bin
Contains the basic shell commands that may be used both by root and
by other users. These commands include ls,
mkdir, cp, mv,
rm and rmdir.
/bin also contains Bash, the default shell in
openSUSE Leap.
/boot
Contains data required for booting, such as the boot loader, the kernel, and other data that is used before the kernel begins executing user-mode programs.
/dev
Holds device files that represent hardware components.
/etc
Contains local configuration files that control the operation of programs
like the X Window System. The /etc/init.d
subdirectory contains LSB init scripts that can be executed during the
boot process.
/home/USERNAME
Holds the private data of every user who has an account on the system. The
files located here can only be modified by their owner or by the system
administrator. By default, your e-mail directory and personal desktop
configuration are located here in the form of hidden files and
directories, such as .gconf/ and
.config.
If you are working in a network environment, your home directory may be
mapped to a directory in the file system other than
/home.
/lib
Contains the essential shared libraries needed to boot the system and to run the commands in the root file system. The Windows equivalent for shared libraries are DLL files.
/media
Contains mount points for removable media, such as CD-ROMs, flash disks,
and digital cameras (if they use USB). /media
generally holds any type of drive except the hard disk of your system.
When your removable medium has been inserted or connected to the system
and has been mounted, you can access it from here.
/mnt
This directory provides a mount point for a temporarily mounted file
system. root may mount file systems here.
/opt
Reserved for the installation of third-party software. Optional software and larger add-on program packages can be found here.
/root
Home directory for the root user. The personal data of root is
located here.
/run
A tmpfs directory used by systemd and various
components. /var/run is a symbolic link to
/run.
/sbin
As the s indicates, this directory holds utilities for
the superuser. /sbin contains the binaries essential
for booting, restoring and recovering the system in addition to the
binaries in /bin.
/srv
Holds data for services provided by the system, such as FTP and HTTP.
/tmp
This directory is used by programs that require temporary storage of files.
/tmp at Boot Time
Data stored in /tmp is not guaranteed to survive a
system reboot. It depends, for example, on settings made in
/etc/tmpfiles.d/tmp.conf.
/usr
/usr has nothing to do with users, but is the acronym
for Unix system resources. The data in /usr is
static, read-only data that can be shared among various hosts compliant
with the Filesystem Hierarchy Standard (FHS). This
directory contains all application programs including the graphical
desktops such as GNOME and establishes a secondary hierarchy in the file
system. /usr holds several subdirectories, such as
/usr/bin, /usr/sbin,
/usr/local, and /usr/share/doc.
/usr/bin
Contains generally accessible programs.
/usr/sbin
Contains programs reserved for the system administrator, such as repair functions.
/usr/local
In this directory the system administrator can install local, distribution-independent extensions.
/usr/share/doc
Holds various documentation files and the release notes for your system.
In the manual subdirectory find an online version of
this manual. If more than one language is installed, this directory may
contain versions of the manuals for different languages.
Under packages find the documentation included in the
software packages installed on your system. For every package, a
subdirectory
/usr/share/doc/packages/PACKAGENAME
is created that often holds README files for the package and sometimes
examples, configuration files or additional scripts.
If HOWTOs are installed on your system, /usr/share/doc
also holds the howto subdirectory in which to find
additional documentation on many tasks related to the setup and operation
of Linux software.
/var
Whereas /usr holds static, read-only data,
/var is for data which is written during system
operation and thus is variable data, such as log files or spooling data.
For an overview of the most important log files you can find under
/var/log/, refer to
Table 18.1, “Log Files”.
/windows
Only available if you have both Microsoft Windows and Linux installed on your system. Contains the Windows data available on the Windows partition of your system. Whether you can edit the data in this directory depends on the file system your Windows partition uses. If it is FAT32, you can open and edit the files in this directory. For NTFS, openSUSE Leap also includes write access support. However, the driver for the NTFS-3g file system has limited functionality.
Shell scripts provide a convenient way to perform a wide range of tasks: collecting data, searching for a word or phrase in a text and other useful things. The following example shows a small shell script that prints a text:
#!/bin/sh
# Output the following line:
echo "Hello World"
The first line begins with the Shebang
characters (#!). The rest of that line names the interpreter that executes the script, in this case /bin/sh.
The second line is a comment beginning with the hash sign. It is recommended to comment difficult lines to remember what they do.
The third line uses the built-in command echo to print the corresponding text.
Before you can run this script you need some prerequisites:
Every script should contain a Shebang line (as in the example above). If the line is missing, you need to call the interpreter manually.
You can save the script wherever you want. However, it is a good idea to
save it in a directory where the shell can find it. The search path in a
shell is determined by the environment variable PATH.
Usually a normal user does not have write access to
/usr/bin. Therefore it is recommended to save your
scripts in the user's directory ~/bin/. The above
example is saved as hello.sh.
The script needs executable permissions. Set the permissions with the following command:
tux > chmod +x ~/bin/hello.sh
If you have fulfilled all of the above prerequisites, you can execute the script in the following ways:
As Absolute Path.
The script can be executed with an absolute path. In our case, it is
~/bin/hello.sh.
Everywhere.
If the PATH environment variable contains the directory
where the script is located, you can execute the script with
hello.sh.
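Put together, the prerequisites and both ways of running the script look like this (a sketch; it uses a temporary directory in place of ~/bin so it can run anywhere):

```shell
# Create a directory for the script and write hello.sh into it
dir=$(mktemp -d)
cat > "$dir/hello.sh" <<'EOF'
#!/bin/sh
echo "Hello World"
EOF

# Give the script executable permissions
chmod +x "$dir/hello.sh"

# Run it with an absolute path
"$dir/hello.sh"

# Add the directory to PATH, then run it by name alone
PATH="$dir:$PATH"
out=$(hello.sh)
echo "$out"
```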
Each command can use three channels, either for input or output:
Standard Output. This is the default output channel. Whenever a command prints something, it uses the standard output channel.
Standard Input. If a command needs input from users or other commands, it uses this channel.
Standard Error. Commands use this channel for error reporting.
To redirect these channels, there are the following possibilities:
Command > File
Saves the output of the command into a file; an existing file will be
overwritten. For example, the ls command writes its output
into the file listing.txt:
tux > ls > listing.txt
Command >> File
Appends the output of the command to a file. For example, the
ls command appends its output to the file
listing.txt:
tux > ls >> listing.txt
Command < File
Reads the file as input for the given command. For example, the
read command reads in the content of the file into the
variable:
tux > read a < foo
Command1 | Command2
Redirects the output of the left command as input for the right command.
For example, the cat command outputs the content of
the /proc/cpuinfo file. This output is used by
grep to filter only those lines which contain
cpu:
tux > cat /proc/cpuinfo | grep cpu
Every channel has a file descriptor: 0 (zero) for
standard input, 1 for standard output and 2 for standard error. It is
allowed to insert this file descriptor before a < or
> character. For example, the following line searches
for a file starting with foo, but suppresses its errors
by redirecting it to /dev/null:
tux > find / -name "foo*" 2>/dev/null
An alias is a shortcut definition of one or more commands. The syntax for an alias is:
alias NAME=DEFINITION
For example, the following line defines an alias lt that
outputs a long listing (option -l), sorts it by
modification time (-t), and prints it in reverse sorted order (-r):
tux > alias lt='ls -ltr'
To view all alias definitions, use alias. Remove your
alias with unalias and the corresponding alias name.
A shell variable can be global or local. Global variables, or environment variables, can be accessed in all shells. In contrast, local variables are visible in the current shell only.
To view all environment variables, use the printenv
command. If you need to know the value of a variable, insert the name of
your variable as an argument:
tux > printenv PATH
A variable, be it global or local, can also be viewed with
echo:
tux > echo $PATH
To set a local variable, use a variable name followed by the equal sign, followed by the value:
tux > PROJECT="SLED"
Do not insert spaces around the equal sign, otherwise you get an error. To
set an environment variable, use export:
tux > export NAME="tux"
To remove a variable, use unset:
tux > unset NAME
The following table contains some common environment variables which can be used in your shell scripts:
|
|
the home directory of the current user |
|
|
the current host name |
|
|
when a tool is localized, it uses the language from this environment
variable. English can also be set to |
|
|
the search path of the shell, a list of directories separated by colon |
|
|
specifies the normal prompt printed before each command |
|
|
specifies the secondary prompt printed when you execute a multi-line command |
|
|
current working directory |
|
|
the current user |
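For example, a script can read these variables like any ordinary shell variable (a minimal sketch; the output depends on your environment):

```shell
# Environment variables are expanded like any other shell variable
echo "Home directory: $HOME"
echo "Search path:    $PATH"
echo "Working dir:    $PWD"
```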
For example, if you have the script foo.sh, you can
execute it like this:
tux > foo.sh "Tux Penguin" 2000
To access all the arguments which are passed to your script, you need
positional parameters. These are $1 for the first argument,
$2 for the second, and so on. To access the tenth argument and
beyond, enclose the number in braces, for example ${10}.
To get the script name, use $0.
The following script foo.sh prints all arguments from 1
to 4:
#!/bin/sh
echo \"$1\" \"$2\" \"$3\" \"$4\"
If you execute this script with the above arguments, you get:
"Tux Penguin" "2000" "" ""
Variable substitutions apply a pattern to the content of a variable either from the left or right side. The following list contains the possible syntax forms:
${VAR#pattern}
removes the shortest possible match from the left:
tux > file=/home/tux/book/book.tar.bz2
tux > echo ${file#*/}
home/tux/book/book.tar.bz2
${VAR##pattern}
removes the longest possible match from the left:
tux > file=/home/tux/book/book.tar.bz2
tux > echo ${file##*/}
book.tar.bz2
${VAR%pattern}
removes the shortest possible match from the right:
tux > file=/home/tux/book/book.tar.bz2
tux > echo ${file%.*}
/home/tux/book/book.tar
${VAR%%pattern}
removes the longest possible match from the right:
tux > file=/home/tux/book/book.tar.bz2
tux > echo ${file%%.*}
/home/tux/book/book
${VAR/pattern_1/pattern_2}
replaces the first match of PATTERN_1 in the content of VAR with PATTERN_2:
tux > file=/home/tux/book/book.tar.bz2
tux > echo ${file/tux/wilber}
/home/wilber/book/book.tar.bz2
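These substitutions are often combined to split a path into its parts (a short sketch, reusing the example file name from above):

```shell
file=/home/tux/book/book.tar.bz2

dir=${file%/*}     # shortest /* match removed from the right: /home/tux/book
name=${file##*/}   # longest */ match removed from the left:   book.tar.bz2
base=${name%%.*}   # everything from the first dot removed:    book
ext=${name#*.}     # everything up to the first dot removed:   tar.bz2

echo "directory: $dir"
echo "base name: $base"
echo "extension: $ext"
```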
Shells allow you to concatenate and group commands for conditional execution. Each command returns an exit code which determines the success or failure of its operation. If it is 0 (zero) the command was successful, everything else marks an error which is specific to the command.
The following list shows how commands can be grouped:
Command1 ; Command2
executes the commands in sequential order. The exit code is not checked.
The following line displays the content of the file with
cat and then prints its file properties with
ls regardless of their exit codes:
tux > cat filelist.txt ; ls -l filelist.txt
Command1 && Command2
runs the right command if the left command was successful (logical AND). The following line displays the content of the file and prints its file properties only when the previous command was successful (compare it with the previous entry in this list):
tux > cat filelist.txt && ls -l filelist.txt
Command1 || Command2
runs the right command when the left command has failed (logical OR).
The following line creates a directory in
/home/wilber/bar only when the creation of the directory
in /home/tux/foo has failed:
tux > mkdir /home/tux/foo || mkdir /home/wilber/bar
funcname(){ ... }
creates a shell function. You can use the positional parameters to access
its arguments. The following line defines the function
hello to print a short message:
tux > hello() { echo "Hello $1"; }
You can call this function like this:
tux > hello Tux
which prints:
Hello Tux
To control the flow of your script, a shell has while,
if, for and case
constructs.
The if command is used to check expressions. For
example, the following code tests whether the current user is Tux:
if test $USER = "tux"; then
  echo "Hello Tux."
else
  echo "You are not Tux."
fi
The test expression can be as simple or as complex as needed. The following
expression checks if the file foo.txt exists:
if test -e /tmp/foo.txt ; then
  echo "Found foo.txt"
fi
The test expression can also be abbreviated in square brackets:
if [ -e /tmp/foo.txt ] ; then
  echo "Found foo.txt"
fi
Find more useful expressions at http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/lsst/ch03sec02.html.
The for Command
The for loop allows you to execute commands to a list of
entries. For example, the following code prints some information about PNG
files in the current directory:
for i in *.png; do
  ls -l $i
done
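The while and case constructs, which are mentioned above but not shown, follow the same pattern (a brief sketch):

```shell
# while: repeat as long as the test command succeeds
i=1
while [ $i -le 3 ]; do
  echo "iteration $i"
  i=$((i + 1))
done

# case: branch on a pattern match against a value
answer=yes
case $answer in
  yes|y) result="confirmed" ;;
  no|n)  result="rejected" ;;
  *)     result="unknown" ;;
esac
echo "answer was $result"
```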
Important information about Bash is provided in the man pages man
bash. More about this topic can be found in the following list:
http://tldp.org/LDP/Bash-Beginners-Guide/html/index.html—Bash Guide for Beginners
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html—BASH Programming - Introduction HOW-TO
http://tldp.org/LDP/abs/html/index.html—Advanced Bash-Scripting Guide
http://www.grymoire.com/Unix/Sh.html—Sh - the Bourne Shell
openSUSE® Leap comes with various sources of information and documentation, many of which are already integrated into your installed system.
This chapter describes a range of potential problems and their solutions. Even if your situation is not precisely listed here, there may be one similar enough to offer hints to the solution of your problem.
openSUSE® Leap comes with various sources of information and documentation, many of which are already integrated into your installed system.
/usr/share/doc
This traditional help directory holds various documentation files and
release notes for your system. It contains also information of installed
packages in the subdirectory packages. Find more
detailed information in Section 17.1, “Documentation Directory”.
When working with the shell, you do not need to know the options of the commands by heart. Traditionally, the shell provides integrated help by means of man pages and info pages. Read more in Section 17.2, “Man Pages” and Section 17.3, “Info Pages”.
The help center of the GNOME desktop (Help) provides central access to the most important documentation resources on your system in searchable form. These resources include online help for installed applications, man pages, info pages, and the SUSE manuals delivered with your product.
When installing new software with YaST, the software documentation is usually installed automatically and appears in the help center of your desktop. However, some applications, such as GIMP, may have different online help packages that can be installed separately with YaST and do not integrate into the help centers.
The traditional directory to find documentation on your
installed Linux system is /usr/share/doc. Usually, the
directory contains information about the packages installed on your system,
plus release notes, manuals, and more.
In the Linux world, many manuals and other kinds of documentation are
available in the form of packages, like software. How much and which
information you find in /usr/share/doc also depends
on the (documentation) packages installed. If you cannot find the
subdirectories mentioned here, check if the respective packages are
installed on your system and add them with YaST, if needed.
We provide HTML and PDF versions of our books in different
languages. In the manual subdirectory, find HTML
versions of most of the SUSE manuals available for your product. For an
overview of all documentation available for your product refer to the
preface of the manuals.
If more than one language is installed,
/usr/share/doc/manual may contain different language
versions of the manuals. The HTML versions of the SUSE manuals are also
available in the help center of both desktops. For information on where to
find the PDF and HTML versions of the books on your installation media,
refer to the openSUSE Leap Release Notes. They are available on your
installed system under /usr/share/doc/release-notes/
or online at your product-specific Web page at https://doc.opensuse.org/release-notes/.
Under packages, find the documentation
that is included in the software packages installed on your system. For
every package, a subdirectory
/usr/share/doc/packages/PACKAGENAME
is created. It often contains README files for the package and sometimes
examples, configuration files, or additional scripts. The following list
introduces typical files to be found under
/usr/share/doc/packages. None of these entries are
mandatory and many packages might only include a few of them.
AUTHORS
List of the main developers.
BUGS
Known bugs or malfunctions. Might also contain a link to a Bugzilla Web page where you can search all bugs.
CHANGES
, ChangeLog
Summary of changes from version to version. Usually interesting for developers, because it is very detailed.
COPYING
, LICENSE
Licensing information.
FAQ
Question and answers collected from mailing lists or newsgroups.
INSTALL
How to install this package on your system. As the package is already installed by the time you get to read this file, you can safely ignore the contents of this file.
README, README.*
General information on the software. For example, for what purpose and how to use it.
TODO
Things that are not implemented yet, but probably will be in the future.
MANIFEST
List of files with a brief summary.
NEWS
Description of what is new in this version.
Man pages are an essential part of any Linux system. They explain the usage
of a command and all available options and parameters. Man pages can be
accessed with man followed by the name of the command,
for example, man ls.
Man pages are displayed directly in the shell. To navigate them, move up and
down with Page ↑ and Page ↓.
Move between the beginning and the end of a document with
Home and End. End this viewing
mode by pressing Q. Learn more about the
man command itself with man man. Man
pages are sorted in categories as shown in
Table 17.1, “Man Pages—Categories and Descriptions” (taken from the man page for man
itself).
|
Number |
Description |
|---|---|
|
1 |
Executable programs or shell commands |
|
2 |
System calls (functions provided by the kernel) |
|
3 |
Library calls (functions within program libraries) |
|
4 |
Special files (usually found in |
|
5 |
File formats and conventions ( |
|
6 |
Games |
|
7 |
Miscellaneous (including macro packages and conventions), for example, man(7), groff(7) |
|
8 |
System administration commands (usually only for |
|
9 |
Kernel routines (nonstandard) |
Each man page consists of several parts labeled NAME , SYNOPSIS , DESCRIPTION , SEE ALSO , LICENSING , and AUTHOR . There may be additional sections available depending on the type of command.
Info pages are another important source of information on your system.
Usually, they are more detailed than man pages. They cover more than
command line options and sometimes contain whole tutorials or reference
documentation. To view the info page for a certain command, enter
info followed by the name of the command, for example,
info ls. You can browse an info page with a viewer
directly in the shell and display the different sections, called
“nodes”. Use Space to move forward and
<— to move backward. Within a node, you can also
browse with Page ↑ and Page ↓
but only Space and <— will also
take you to the previous or subsequent node. Press Q
to end the viewing mode. Not every command comes with an info page and vice
versa.
In addition to the online versions of the SUSE manuals installed under
/usr/share/doc, you can also access the
product-specific manuals and documentation on the Web. For an overview of
all documentation available for openSUSE Leap check out your
product-specific documentation Web page at
http://doc.opensuse.org/.
If you are searching for additional product-related information, you can also refer to the following Web sites:
There are several forums where you can dive in on discussions about SUSE products. See http://forums.opensuse.org/ for a list.
Documentation for GNOME users, administrators and developers is available at http://library.gnome.org/.
The Linux Documentation Project (TLDP) is run by a team of volunteers who write Linux-related documentation (see http://www.tldp.org). It is probably the most comprehensive documentation resource for Linux. The set of documents contains tutorials for beginners, but is mainly focused on experienced users and professional system administrators. TLDP publishes HOWTOs, FAQs, and guides (handbooks) under a free license. Parts of the documentation from TLDP are also available on openSUSE Leap.
You can also try general-purpose search engines. For example, use the search
terms Linux CD-RW help or OpenOffice file
conversion problem if you have trouble with burning CDs or LibreOffice
file conversion.
This chapter describes a range of potential problems and their solutions. Even if your situation is not precisely listed here, there may be one similar enough to offer hints to the solution of your problem.
Linux reports things in a very detailed way. There are several places to look when you encounter problems with your system, most of which are standard to Linux systems in general, and some are relevant to openSUSE Leap systems. Most log files can be viewed with YaST ( › ).
YaST offers the possibility to collect all system information needed by the support team. Use › and select the problem category. When all information is gathered, attach it to your support request.
A list of the most frequently checked log files follows with the description
of their typical purpose. Paths containing ~ refer to
the current user's home directory.
|
Log File |
Description |
|---|---|
|
|
Messages from the desktop applications currently running. |
|
|
Log files from AppArmor, see Part IV, “Confining Privileges with AppArmor” for detailed information. |
|
|
Log file from Audit to track any access to files, directories, or resources of your system, and trace system calls. See Part VI, “The Linux Audit Framework” for detailed information. |
|
|
Messages from the mail system. |
|
|
Log file from NetworkManager to collect problems with network connectivity |
|
|
Directory containing Samba server and client log messages. |
|
|
All messages from the kernel and system log daemon with the “warning” level or higher. |
|
|
Binary file containing user login records for the current machine
session. View it with |
|
|
Various start-up and runtime log files from the X Window System. It is useful for debugging failed X start-ups. |
|
|
Directory containing YaST's actions and their results. |
|
|
Log file of Zypper. |
Apart from log files, your machine also supplies you with information about
the running system. See
Table 18.2: System Information With the /proc File System
|
File |
Description |
|---|---|
|
|
Contains processor information, including its type, make, model, and performance. |
|
|
Shows which DMA channels are currently being used. |
|
|
Shows which interrupts are in use, and how many of each have been in use. |
|
|
Displays the status of I/O (input/output) memory. |
|
|
Shows which I/O ports are in use at the moment. |
|
|
Displays memory status. |
|
|
Displays the individual modules. |
|
|
Displays devices currently mounted. |
|
|
Shows the partitioning of all hard disks. |
|
|
Displays the current version of Linux. |
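The files listed above are plain text and can be read with ordinary commands (a sketch; the exact contents vary between systems):

```shell
# The kernel version string
kernel=$(cat /proc/version 2>/dev/null || echo "not available")
echo "Kernel:  $kernel"

# Count the processor entries in /proc/cpuinfo
cpus=$(grep -c '^processor' /proc/cpuinfo 2>/dev/null)
echo "CPUs:    ${cpus:-unknown}"

# Total memory, from the first line of /proc/meminfo
grep '^MemTotal' /proc/meminfo 2>/dev/null || echo "MemTotal: not available"
```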
Apart from the /proc file system, the Linux kernel
exports information with the sysfs module, an in-memory
file system. This module represents kernel objects, their attributes and
relationships. For more information about sysfs, see the
context of udev in Chapter 16, Dynamic Kernel Device Management with udev.
Table 18.3 contains
an overview of the most common directories under /sys.
|
File |
Description |
|---|---|
|
|
Contains subdirectories for each block device discovered in the system. Generally, these are mostly disk type devices. |
|
|
Contains subdirectories for each physical bus type. |
|
|
Contains subdirectories grouped together as functional types of devices (like graphics, net, printer, etc.) |
|
|
Contains the global device hierarchy. |
Linux comes with several tools for system analysis and monitoring. See Chapter 2, System Monitoring Utilities for a selection of the most important ones used in system diagnostics.
Each of the following scenarios begins with a header describing the problem followed by a paragraph or two offering suggested solutions, available references for more detailed solutions, and cross-references to other scenarios that are related.
Boot problems are situations when your system does not boot properly (does not boot to the expected target and login screen).
If the hardware is functioning properly, it is possible that the boot loader is corrupted and Linux cannot start on the machine. In this case, it is necessary to repair the boot loader. To do so, you need to start the Rescue System as described in Section 18.5.2, “Using the Rescue System” and follow the instructions in Section 18.5.2.4, “Modifying and Re-installing the Boot Loader”.
Alternatively, you can use the Rescue System to fix the boot loader as follows. Boot your machine from the installation media. In the boot screen, choose › . Select the disk containing the installed system and kernel with the default kernel options.
When the system is booted, start YaST and switch to › . Make sure that the option is enabled, and press . This fixes the corrupted boot loader by overwriting it, or installs the boot loader if it is missing.
Other reasons for the machine not booting may be BIOS-related:
Check your BIOS for references to your hard disk. GRUB 2 may simply not be started if the hard disk itself cannot be found with the current BIOS settings.
Check whether your system's boot order includes the hard disk. If the hard disk option was not enabled, your system may install properly, but fails to boot when access to the hard disk is required.
This behavior typically occurs after a failed kernel upgrade and is known as a kernel panic, after the characteristic error message that sometimes can be seen on the system console at the final stage of the boot process. If, in fact, the machine has just been rebooted following a software update, the immediate goal is to reboot it using the old, proven version of the Linux kernel and associated files. This can be done in the GRUB 2 boot loader screen during the boot process as follows:
Reboot the computer using the reset button, or switch it off and on again.
When the GRUB 2 boot screen becomes visible, select the entry and choose the previous kernel from the menu. The machine will boot using the prior version of the kernel and its associated files.
After the boot process has completed, remove the newly installed kernel and, if necessary, set the default boot entry to the old kernel using the YaST module. For more information refer to Section 12.3, “Configuring the Boot Loader with YaST”. However, doing this is probably not necessary because automated update tools normally modify it for you during the rollback process.
Reboot.
If this does not fix the problem, boot the computer using the installation media. After the machine has booted, continue with Step 3.
If the machine starts but does not boot into the graphical login
manager, expect problems either with the choice of the default systemd
target or with the configuration of the X Window System. To check the
current systemd default target, run the command sudo systemctl
get-default. If the value returned is not
graphical.target, run the command sudo
systemctl isolate graphical.target. If the graphical login screen
starts, log in, start the YaST Services Manager, and set the default
target to the graphical target. From now on the system should boot into
the graphical login screen.
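The target check and switch described above can be sketched as a short shell session; the fallback to "unknown" is only there so the sketch degrades gracefully on a machine without systemctl:

```shell
# Query the current default target; fall back to "unknown" if systemctl
# is unavailable (e.g. when trying this sketch outside a systemd host).
current=$(systemctl get-default 2>/dev/null || echo "unknown")
echo "current default target: $current"
if [ "$current" != "graphical.target" ]; then
    # On a real system, switch the running session and make it permanent:
    # sudo systemctl isolate graphical.target
    # sudo systemctl set-default graphical.target
    echo "would switch to graphical.target"
fi
```

The isolate command only affects the running session; set-default makes the choice persistent across reboots.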
If the graphical login screen does not start even after booting or
switching to the graphical target, your desktop or X Window software is
probably misconfigured or corrupted. Examine the log files at
/var/log/Xorg.*.log for detailed messages from the X
server as it attempted to start. If the desktop fails during start-up, it
may log error messages to the system journal, which can be queried with the
command journalctl (see Chapter 11, journalctl: Query the systemd Journal
for more information). If these error messages hint at a configuration
problem in the X server, try to fix those issues. If the graphical system
still does not come up, consider reinstalling the graphical desktop.
If a btrfs root partition
becomes corrupted, try the following options:
Mount the partition with the -o recovery option.
If that fails, run btrfs-zero-log on your root
partition.
If the root partition becomes corrupted, use the parameter
forcefsck on the boot prompt. This passes the option
-f (force) to the fsck command.
Login problems occur when your machine does boot to the expected welcome screen or login prompt, but refuses to accept the user name and password, or accepts them but then does not behave properly (fails to start the graphical desktop, produces errors, drops to a command line, etc.).
This usually occurs when the system is configured to use network
authentication or directory services and, for some reason, cannot retrieve
results from its configured servers. In such cases, the
root user, as the only local
user, is the only one who can still log in. The
following are some common reasons a machine appears functional but cannot
process logins correctly:
The network is not working. For further directions on this, turn to Section 18.4, “Network Problems”.
DNS is not working at the moment (which prevents GNOME from working and the system from making validated requests to secure servers). One indication that this is the case is that the machine takes an extremely long time to respond to any action. Find more information about this topic in Section 18.4, “Network Problems”.
If the system is configured to use Kerberos, the system's local time may have drifted past the accepted variance with the Kerberos server time (this is typically 300 seconds). If NTP (network time protocol) is not working properly or local NTP servers are not working, Kerberos authentication ceases to function because it depends on common clock synchronization across the network.
The system's authentication configuration is misconfigured. Check the PAM configuration files involved for any typographical errors or misordering of directives. For additional background information about PAM and the syntax of the configuration files involved, refer to Chapter 2, Authentication with PAM.
The home partition is encrypted. Find more information about this topic in Section 18.3.3, “Login to Encrypted Home Partition Fails”.
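For the Kerberos case in the list above, the skew check can be sketched like this; the server timestamp is a placeholder, and in practice it would come from the NTP server or KDC host:

```shell
# Kerberos typically rejects requests when clocks differ by more than 300 s.
max_skew=300
local_time=$(date +%s)
server_time=$local_time   # placeholder: substitute the KDC/NTP server time
skew=$((local_time - server_time))
abs_skew=${skew#-}        # strip a leading minus sign, if any
if [ "$abs_skew" -le "$max_skew" ]; then
    echo "clock skew ${abs_skew}s: within Kerberos tolerance"
else
    echo "clock skew ${abs_skew}s: fix NTP before debugging Kerberos"
fi
```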
In all cases that do not involve external network problems, the solution is to reboot the system into single-user mode and repair the configuration before booting again into operating mode and attempting to log in again. To boot into single-user mode:
Reboot the system. The boot screen appears, offering a prompt.
Press Esc to exit the splash screen and get to the GRUB 2 text-based menu.
Press E to enter the GRUB 2 editor.
Add the following parameter to the line containing the kernel parameters:
systemd.unit=rescue.target
Press F10.
Enter the user name and password for
root.
Make all the necessary changes.
Boot into the full multiuser and network mode by entering
systemctl isolate graphical.target at the command
line.
This is by far the most common problem users encounter, because there are many reasons this can occur. Depending on whether you use local user management and authentication or network authentication, login failures occur for different reasons.
Local user management can fail for the following reasons:
The user may have entered the wrong password.
The user's home directory containing the desktop configuration files is corrupted or write protected.
There may be problems with the X Window System authenticating this particular user, especially if the user's home directory has been used with another Linux distribution prior to installing the current one.
To locate the reason for a local login failure, proceed as follows:
Check whether the user remembered their password correctly before you start debugging the whole authentication mechanism. If the user does not remember their password correctly, use the YaST User Management module to change it. Pay attention to the Caps Lock key and unlock it, if necessary.
Log in as root and check the
system journal with journalctl -e for error messages
of the login process and of PAM.
Try to log in from a console (using Ctrl–Alt–F1). If this is successful, the blame cannot be put on PAM, because it is possible to authenticate this user on this machine. Try to locate any problems with the X Window System or the GNOME desktop. For more information, refer to Section 18.3.4, “Login Successful but GNOME Desktop Fails”.
If the user's home directory has been used with another Linux
distribution, remove the Xauthority file in the
user's home. Use a console login via Ctrl–Alt–F1 and run rm .Xauthority as this user. This
should eliminate X authentication problems for this user. Try graphical
login again.
If the desktop could not start because of corrupt configuration files, proceed with Section 18.3.4, “Login Successful but GNOME Desktop Fails”.
The following are common reasons network authentication for a particular user may fail on a specific machine:
The user may have entered the wrong password.
The user name exists in the machine's local authentication files and is also provided by a network authentication system, causing conflicts.
The home directory exists but is corrupt or unavailable. Perhaps it is write protected or is on a server that is inaccessible at the moment.
The user does not have permission to log in to that particular host in the authentication system.
The machine has changed host names, for whatever reason, and the user does not have permission to log in to that host.
The machine cannot reach the authentication server or directory server that contains that user's information.
There may be problems with the X Window System authenticating this particular user, especially if the user's home has been used with another Linux distribution prior to installing the current one.
To locate the cause of the login failures with network authentication, proceed as follows:
Check whether the user remembered their password correctly before you start debugging the whole authentication mechanism.
Determine the directory server which the machine relies on for authentication and make sure that it is up and running and properly communicating with the other machines.
Determine that the user's user name and password work on other machines to make sure that their authentication data exists and is properly distributed.
See if another user can log in to the misbehaving machine. If another
user can log in without difficulty, or if
root can log in, log in and
examine the system journal with journalctl -e.
Locate the time stamps that correspond to the login attempts and
determine whether PAM has produced any error messages.
Try to log in from a console (using Ctrl–Alt–F1). If this is successful, the problem is not with PAM or the directory server on which the user's home is hosted, because it is possible to authenticate this user on this machine. Try to locate any problems with the X Window System or the GNOME desktop. For more information, refer to Section 18.3.4, “Login Successful but GNOME Desktop Fails”.
If the user's home directory has been used with another Linux
distribution, remove the Xauthority file in the
user's home. Use a console login via Ctrl–Alt–F1 and run rm .Xauthority as this user. This
should eliminate X authentication problems for this user. Try graphical
login again.
If the desktop could not start because of corrupt configuration files, proceed with Section 18.3.4, “Login Successful but GNOME Desktop Fails”.
It is recommended to use an encrypted home partition for laptops. If you cannot log in to your laptop, the reason is usually simple: your partition could not be unlocked.
During boot, you are asked to enter the passphrase to unlock your encrypted partition. If you do not enter it, the boot process continues, leaving the partition locked.
To unlock your encrypted partition, proceed as follows:
Switch to the text console with Ctrl–Alt–F1.
Become root.
Restart the unlocking process with:
root # systemctl restart home.mount
Enter your passphrase to unlock your encrypted partition.
Exit the text console and switch back to the login screen with Alt–F7.
Log in as usual.
If this is the case, it is likely that your GNOME configuration files have become corrupted. Some symptoms may include the keyboard failing to work, the screen geometry becoming distorted, or even the screen coming up as a bare gray field. The important distinction is that if another user logs in, the machine works normally. It is then likely that the problem can be fixed relatively quickly by simply moving the user's GNOME configuration directory to a new location, which causes GNOME to initialize a new one. Although the user is forced to reconfigure GNOME, no data is lost.
Switch to a text console by pressing Ctrl–Alt–F1.
Log in with your user name.
Move the user's GNOME configuration directories to a temporary location:
tux > mv .gconf .gconf-ORIG-RECOVER
tux > mv .gnome2 .gnome2-ORIG-RECOVER
Log out.
Log in again, but do not run any applications.
Recover your individual application configuration data (including the
Evolution e-mail client data) by copying the
~/.gconf-ORIG-RECOVER/apps/ directory back into the
new ~/.gconf directory as follows:
tux > cp -a .gconf-ORIG-RECOVER/apps .gconf/
If this causes login problems, attempt to recover only the critical application data and reconfigure the remainder of the applications.
Many problems of your system may be network-related, even though they do not seem to be at first. For example, the reason for a system not allowing users to log in may be a network problem of some kind. This section introduces a simple checklist you can apply to identify the cause of any network problem encountered.
When checking the network connection of your machine, proceed as follows:
If you use an Ethernet connection, check the hardware first. Make sure that your network cable is properly plugged into your computer and router (or hub, etc.). The control lights next to your Ethernet connector should normally both be active.
If the connection fails, check whether your network cable works with another machine. If it does, your network card is the cause of the failure. If hubs or switches are included in your network setup, they may be faulty as well.
If using a wireless connection, check whether the wireless link can be established by other machines. If not, contact the wireless network's administrator.
Once you have checked your basic network connectivity, try to find out which service is not responding. Gather the address information of all network servers needed in your setup. Either look them up in the appropriate YaST module or ask your system administrator. The following list gives some typical network servers involved in a setup together with the symptoms of an outage.
A broken or malfunctioning name service affects the network's functionality in many ways. If the local machine relies on any network servers for authentication and these servers cannot be found because of name resolution issues, users would not even be able to log in. Machines in the network managed by a broken name server would not be able to “see” each other and communicate.
A malfunctioning or completely broken NTP service could affect Kerberos authentication and X server functionality.
If any application needs data stored in an NFS mounted directory, it
cannot start or function properly if this service was down or
misconfigured. In the worst case scenario, a user's personal desktop
configuration would not come up if their home directory containing the
.gconf subdirectory could not be found because of
a faulty NFS server.
If any application needs data stored in a directory on a faulty Samba server, it cannot start or function properly.
If your openSUSE Leap system relies on a faulty NIS server to provide the user data, users cannot log in to this machine.
If your openSUSE Leap system relies on a faulty LDAP server to provide the user data, users cannot log in to this machine.
Authentication will not work and login to any machine fails.
Users cannot print.
Check whether the network servers are running and whether your network setup allows you to establish a connection:
The debugging procedure described below only applies to a simple network server/client setup that does not involve any internal routing. It assumes both server and client are members of the same subnet without the need for additional routing.
Use ping
IP_ADDRESS/HOSTNAME
(replace with the host name or IP address of the server) to check whether
each one of them is up and responding to the network. If this command is
successful, it tells you that the host you were looking for is up and
running and that the name service for your network is configured
correctly.
If ping fails with destination host unreachable,
either your system or the desired server is not properly configured or
is down. Check whether your system is reachable by running
ping YOUR_IP_ADDRESS or
ping YOUR_HOSTNAME from another machine. If you
can reach your machine from another machine, it is the server that is
not running or not configured correctly.
If ping fails with unknown host, the name service is
not configured correctly or the host name used was incorrect. For
further checks on this matter, refer to
Step 4.b. If
ping still fails, either your network card is not configured correctly
or your network hardware is faulty.
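The ping checks above can be wrapped in a small sketch; the address is an example, -c limits the probe count, and -W sets the timeout in seconds:

```shell
# Probe one host and report in the terms used above; "unreachable" covers
# both a down host and a broken local configuration.
host=192.168.1.1          # example address: substitute your server
if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    echo "$host is up and responding"
else
    echo "$host unreachable: check the server, cabling, or local setup"
fi
```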
Use host HOSTNAME to
check whether the host name of the server you are trying to connect to
is properly translated into an IP address and vice versa. If this
command returns the IP address of this host, the name service is up and
running. If the host command fails, check all network
configuration files relating to name and address resolution on your
host:
/etc/resolv.conf
This file is used to keep track of the name server and domain you are currently using. It can be modified manually or automatically adjusted by YaST or DHCP. Automatic adjustment is preferable. However, make sure that this file has the following structure and all network addresses and domain names are correct:
search FULLY_QUALIFIED_DOMAIN_NAME
nameserver IPADDRESS_OF_NAMESERVER
This file can contain more than one name server address, but at least one of them must be correct to provide name resolution to your host. If needed, adjust this file using the YaST Network Settings module (Hostname/DNS tab).
If your network connection is handled via DHCP, allow DHCP to change the host name and name service information in the YaST Network Settings module (Hostname/DNS tab). These options can be set globally or per interface.
/etc/nsswitch.conf
This file tells Linux where to look for name service information. It should look like this:
...
hosts: files dns
networks: files dns
...
The dns entry is vital. It tells Linux to use an
external name server. Normally, these entries are automatically
managed by YaST, but it would be prudent to check.
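Because getent consults /etc/nsswitch.conf, it exercises the same files-then-dns lookup order shown above and is a quick way to verify it; localhost is used here because it resolves from /etc/hosts even without a working DNS server:

```shell
# Resolve a name through the NSS lookup chain rather than querying DNS
# directly; compare the result with what applications actually see.
getent hosts localhost
```

Substituting a real host name for localhost tests the dns stage of the chain as well.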
If all the relevant entries on the host are correct, let your system administrator check the DNS server configuration for the correct zone information. For detailed information about DNS, refer to Chapter 19, The Domain Name System. If you have made sure that the DNS configuration of your host and the DNS server are correct, proceed with checking the configuration of your network and network device.
If your system cannot establish a connection to a network server and you have excluded name service problems from the list of possible culprits, check the configuration of your network card.
Use the command ip addr show
NETWORK_DEVICE to check whether this device
was properly configured. Make sure that the inet
address with the netmask
(/MASK) is configured
correctly. An error in the IP address or a missing bit in your network
mask would render your network configuration unusable. If necessary,
perform this check on the server as well.
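As a quick illustration, the inet field can be pulled out of an ip addr show line with awk; the sample line below is hypothetical, standing in for real command output:

```shell
# Extract ADDRESS/MASK from a captured "ip addr show" line. A wrong address
# or prefix length here is exactly the error the check above looks for.
sample='2: eth0    inet 192.168.1.10/24 brd 192.168.1.255 scope global eth0'
addr=$(echo "$sample" | awk '{for (i = 1; i <= NF; i++) if ($i == "inet") print $(i+1)}')
echo "configured address/mask: $addr"
```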
If the name service and network hardware are properly configured and
running, but some external network connections still get long time-outs
or fail entirely, use traceroute
FULLY_QUALIFIED_DOMAIN_NAME (executed as
root) to track the network
route these requests are taking. This command lists any gateway (hop)
that a request from your machine passes on its way to its destination.
It lists the response time of each hop and whether this hop is
reachable. Use a combination of traceroute and ping to track down the
culprit and let the administrators know.
Once you have identified the cause of your network trouble, you can resolve it yourself (if the problem is located on your machine) or let the system administrators of your network know about your findings so they can reconfigure the services or repair the necessary systems.
If you have a problem with network connectivity, narrow it down as described in Procedure 18.1, “How to Identify Network Problems”. If NetworkManager seems to be the culprit, proceed as follows to get logs providing hints on why NetworkManager fails:
Open a shell and log in as
root.
Restart the NetworkManager:
tux > sudo systemctl restart NetworkManager
Open a Web page, for example, http://www.opensuse.org, as a normal user to see if you can connect.
Collect any information about the state of NetworkManager in
/var/log/NetworkManager.
For more information about NetworkManager, refer to Chapter 28, Using NetworkManager.
Data problems arise when the machine may or may not boot properly but, in either case, it is clear that data on the system is corrupted and the system needs to be recovered. These situations call for a backup of your critical data, enabling you to restore the system state from before the failure.
Sometimes you need to back up an entire partition or even a whole
hard disk. Linux comes with the dd tool, which can create
an exact copy of your disk. Combined with gzip, this saves
some space.
Start a shell as user root.
Select your source device. Typically this is something like
/dev/sda (labeled as
SOURCE).
Decide where you want to store your image (labeled as
BACKUP_PATH). It must be different from your
source device. In other words, if you make a backup of
/dev/sda, your image file must not be stored
under /dev/sda.
Run the following command to create a compressed image file:
root # dd if=/dev/SOURCE | gzip > /BACKUP_PATH/image.gz
Restore the hard disk with the following command:
root # gzip -dc /BACKUP_PATH/image.gz | dd of=/dev/SOURCE
If you only need to back up a partition, replace the SOURCE placeholder with the respective partition. In this case, your image file can lie on the same hard disk, but on a different partition.
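The dd-and-gzip pipe above can be rehearsed safely on a scratch file; the small demo image below stands in for a real partition dump, and the real restore pipe is shown only as a comment:

```shell
# Create a small stand-in "partition image", compress it, and verify the
# archive's integrity: the same checks are worth running on a real backup.
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=16 2>/dev/null
gzip -f -c /tmp/demo.img > /tmp/demo.img.gz
gzip -t /tmp/demo.img.gz && echo "compressed image passes integrity check"
# Real restore (do not run against the demo):
# gzip -dc /BACKUP_PATH/image.gz | dd of=/dev/SOURCE
```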
There are several reasons a system could fail to come up and run properly. A corrupted file system following a system crash, corrupted configuration files, or a corrupted boot loader configuration are the most common ones.
To help you to resolve these situations, openSUSE Leap contains a rescue system that you can boot. The rescue system is a small Linux system that can be loaded into a RAM disk and mounted as root file system, allowing you to access your Linux partitions from the outside. Using the rescue system, you can recover or modify any important aspect of your system.
Manipulate any type of configuration file.
Check the file system for defects and start automatic repair processes.
Access the installed system in a “change root” environment.
Check, modify, and re-install the boot loader configuration.
Recover from a badly installed device driver or unusable kernel.
Resize partitions using the parted command. Find more information about this tool at the GNU Parted Web site http://www.gnu.org/software/parted/parted.html.
The rescue system can be loaded from various sources and locations. The simplest option is to boot the rescue system from the original installation medium.
Insert the installation medium into your DVD drive.
Reboot the system.
At the boot screen, press F4 and select the DVD as the boot medium. Then choose Rescue System from the main menu.
Enter root at the Rescue: prompt. A
password is not required.
If your hardware setup does not include a DVD drive, you can boot the rescue
system from a network source. The following example applies to a remote boot
scenario—if using another boot medium, such as a DVD, modify the
info file accordingly and boot as you would for a
normal installation.
Enter the configuration of your PXE boot setup and add the lines
install=PROTOCOL://INSTSOURCE
and rescue=1. If you need to start the repair system,
use repair=1 instead. As with a normal installation,
PROTOCOL stands for any of the supported network
protocols (NFS, HTTP, FTP, etc.) and INSTSOURCE
for the path to your network installation source.
Boot the system using “Wake on LAN”.
Enter root at the Rescue: prompt. A
password is not required.
Once you have entered the rescue system, you can use the virtual consoles that can be reached with Alt–F1 to Alt–F6.
A shell and other useful utilities, such as the mount program, are
available in the /bin directory. The
/sbin directory contains important file and network
utilities for reviewing and repairing the file system. This directory also
contains the most important binaries for system maintenance, such as
fdisk, mkfs, mkswap,
mount, and shutdown, as well as
ip and ss for maintaining the network.
The directory /usr/bin contains the vi editor, find,
less, and SSH.
To see the system messages, either use the command dmesg
or view the system log with journalctl.
As an example for a configuration that might be fixed using the rescue system, imagine you have a broken configuration file that prevents the system from booting properly. You can fix this using the rescue system.
To manipulate a configuration file, proceed as follows:
Start the rescue system using one of the methods described above.
To mount a root file system located under /dev/sda6
to the rescue system, use the following command:
tux > sudo mount /dev/sda6 /mnt
All directories of the system are now located under
/mnt
Change the directory to the mounted root file system:
tux > cd /mnt
Open the problematic configuration file in the vi editor. Adjust and save the configuration.
Unmount the root file system from the rescue system:
tux > sudo umount /mnt
Reboot the machine.
Generally, file systems cannot be repaired on a running system. If you
encounter serious problems, you may not even be able to mount your root file
system and the system boot may end with a “kernel panic”. In
this case, the only way is to repair the system from the outside. The system
contains the utilities to check and repair the btrfs,
ext2, ext3, ext4,
xfs, dosfs, and vfat
file systems. Look for the command
fsck.FILESYSTEM.
For example, if you need a file system
check for btrfs, use fsck.btrfs.
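The helper name is composed mechanically from the file system type, which the following sketch illustrates; the device path is a placeholder:

```shell
# fsck dispatches to a helper named fsck.TYPE; build that name for a type.
fstype=btrfs                     # substitute ext4, xfs, vfat, ...
checker="fsck.$fstype"
echo "would run: $checker /dev/sda2"   # placeholder device; never check a mounted FS
```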
If you need to access the installed system from the rescue system, do so in a change root environment: for example, to modify the boot loader configuration or to execute a hardware configuration utility.
To set up a change root environment based on the installed system, proceed as follows:
If you are using a LVM setup (refer to Section 5.2, “LVM Configuration” for more general details), import all existing volume groups in order to be able to find and mount the device(s):
root # vgimport -a
Run lsblk to check which node corresponds to the root
partition. It is /dev/sda2 in our example:
tux > lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0 149,1G  0 disk
├─sda1        8:1    0     2G  0 part  [SWAP]
├─sda2        8:2    0    20G  0 part  /
└─sda3        8:3    0   127G  0 part
  └─cr_home 254:0    0   127G  0 crypt /home
Mount the root partition from the installed system:
tux > sudo mount /dev/sda2 /mnt
Mount /proc, /dev, and
/sys partitions:
tux > sudo mount -t proc none /mnt/proc
tux > sudo mount --rbind /dev /mnt/dev
tux > sudo mount --rbind /sys /mnt/sys
Now you can “change root” into the new environment, keeping
the bash shell:
tux > chroot /mnt /bin/bash
Finally, mount the remaining partitions from the installed system:
tux > mount -a
Now you have access to the installed system. Before rebooting the system,
unmount the partitions with umount -a
and leave the “change root” environment with
exit.
Although you have full access to the files and applications of the
installed system, there are some limitations. The kernel that is running is
the one that was booted with the rescue system, not with the change root
environment. It only supports essential hardware and it is not possible to
add kernel modules from the installed system unless the kernel versions are
identical. Always check the version of the currently running (rescue)
kernel with uname -r and then find out if a matching
subdirectory exists in the /lib/modules directory in
the change root environment. If yes, you can use the installed modules,
otherwise you need to supply the correct module versions on other media, such
as a flash disk. Most often the rescue kernel version differs from the
installed one; in that case you cannot simply access a sound card, for
example. It is also not possible to start a graphical user interface.
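The version comparison described above can be scripted; /mnt is assumed to be the mounted root of the installed system:

```shell
# Usable modules require an exact kernel version match between the running
# rescue kernel and a module tree under the installed system's root.
running=$(uname -r)
if [ -d "/mnt/lib/modules/$running" ]; then
    echo "matching module directory found for kernel $running"
else
    echo "no modules for kernel $running; supply them on separate media"
fi
```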
Also note that you leave the “change root” environment when you switch the console with Alt–F1 to Alt–F6.
Sometimes a system cannot boot because the boot loader configuration is corrupted. The start-up routines cannot, for example, translate physical drives to the actual locations in the Linux file system without a working boot loader.
To check the boot loader configuration and re-install the boot loader, proceed as follows:
Perform the necessary steps to access the installed system as described in Section 18.5.2.3, “Accessing the Installed System”.
Check that the GRUB 2 boot loader is installed on the system. If not,
install the package grub2 and run
tux > sudo grub2-install /dev/sda
Check whether the following files are correctly configured according to the GRUB 2 configuration principles outlined in Chapter 12, The Boot Loader GRUB 2 and apply fixes if necessary.
/etc/default/grub
/boot/grub2/device.map (optional file, only present
if created manually)
/boot/grub2/grub.cfg (this file is generated, do
not edit)
/etc/sysconfig/bootloader
Re-install the boot loader using the following command sequence:
tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Unmount the partitions, log out from the “change root” environment, and reboot the system:
tux > umount -a
exit
reboot
A kernel update may introduce a new bug which can impact the operation of your system. For example, a driver for a piece of hardware in your system may be faulty, which prevents you from accessing and using it. In this case, revert to the last working kernel (if available on the system) or install the original kernel from the installation media.
To prevent failures to boot after a faulty kernel update, use the kernel
multiversion feature and tell libzypp which
kernels you want to keep after the update.
For example, to always keep the last two kernels and the currently running one, add
multiversion.kernels = latest,latest-1,running
to the /etc/zypp/zypp.conf file. See
Chapter 6, Installing Multiple Kernel Versions for more information.
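Adding the retention line can be sketched against a scratch copy; on a real system you would edit /etc/zypp/zypp.conf itself, ideally after backing it up:

```shell
# Demonstrate the multiversion setting on a temporary file rather than
# the live /etc/zypp/zypp.conf.
conf=/tmp/zypp.conf.demo
printf '## kernel retention policy\n' > "$conf"
printf 'multiversion.kernels = latest,latest-1,running\n' >> "$conf"
grep -q '^multiversion.kernels' "$conf" && echo "retention policy present"
```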
A similar case is when you need to re-install or update a broken driver for a device not supported by openSUSE Leap. For example, a hardware vendor may use a specific device, such as a hardware RAID controller, which needs a binary driver to be recognized by the operating system. The vendor typically releases a Driver Update Disk (DUD) with a fixed or updated version of the required driver.
In both cases you need to access the installed system in the rescue mode and fix the kernel related problem, otherwise the system may fail to boot correctly:
Boot from the openSUSE Leap installation media.
If you are recovering after a faulty kernel update, skip this step. If you need to use a driver update disk (DUD), press F6 to load the driver update after the boot menu appears, choose the path or URL to the driver update, and confirm.
Choose Rescue System from the boot menu and press Enter. If you chose to use a DUD, you are asked to specify where the driver update is stored.
Enter root at the Rescue: prompt. A
password is not required.
Manually mount the target system and “change root” into the new environment. For more information, see Section 18.5.2.3, “Accessing the Installed System”.
If using DUD, install/re-install/update the faulty device driver package. Always make sure the installed kernel version exactly matches the version of the driver you are installing.
If you are fixing a faulty kernel update, you can install the original kernel from the installation media with the following procedure.
Identify your DVD device with hwinfo --cdrom and
mount it with mount /dev/sr0 /mnt.
Navigate to the directory where your kernel files are stored on the DVD,
for example cd /mnt/suse/x86_64/.
Install the required kernel-*,
kernel-*-base, and
kernel-*-extra packages of your flavor with the
rpm -i command.
Update configuration files and reinitialize the boot loader if needed. For more information, see Section 18.5.2.4, “Modifying and Re-installing the Boot Loader”.
Remove any bootable media from the system drive and reboot.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
This manual gives you a general understanding of openSUSE® Leap. It is intended mainly for system administrators and home users with basic system administration knowledge. Check out the various parts of this manual for a selection of applications needed in everyday life and in-depth descriptions of advanced installation and configuration scenarios.
Learn about advanced administration tasks such as using YaST in text mode and managing software from the command line. Find out how to do system rollbacks with Snapper and how to use advanced storage techniques on openSUSE Leap.
Get an introduction to the components of your Linux system and a deeper understanding of their interaction.
Learn how to configure the various network and file services that come with openSUSE Leap.
Get an introduction to mobile computing with openSUSE Leap, and get to know the various options for wireless computing and power management.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the security software included with the product, such as AppArmor, or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Learn how to inspect and optimize your system by means of monitoring tools and how to manage resources efficiently. Also contains an overview of common problems and solutions, and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/ and log in.
For feedback on the documentation of this product, you can also send a
mail to doc-team@suse.com. Make sure to include the
document title, the product version and the publication date of the
documentation. To report errors or suggest enhancements, provide a concise
description of the problem and refer to the respective section number and
page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and parameters
user: users or groups
package name: name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
Menu › Menu Item: menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.
root # command
tux > sudo command
Commands that can be run by non-privileged users.
tux > command
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
This documentation is written in SUSEDoc, a subset of
DocBook 5.
The XML source files were validated by jing (see
https://code.google.com/p/jing-trang/), processed by
xsltproc, and converted into XSL-FO using a customized
version of Norman Walsh's stylesheets. The final PDF is formatted through FOP
from
Apache
Software Foundation. The open source tools and the environment used to
build this documentation are provided by the DocBook Authoring and Publishing
Suite (DAPS). The project's home page can be found at
https://github.com/openSUSE/daps.
The XML source code of this documentation can be found at https://github.com/SUSE/doc-sle.
The source code of openSUSE Leap is publicly available. Refer to http://en.opensuse.org/Source_code for download links and more information.
With a lot of voluntary commitment, the developers of Linux cooperate on a global scale to promote the development of Linux. We thank them for their efforts—this distribution would not exist without them. Special thanks, of course, goes to Linus Torvalds.
This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.
This chapter describes Zypper and RPM, two command line tools for managing
software. For a definition of the terminology used in this context (for
example, repository, patch, or
update) refer to
Section 11.1, “Definition of Terms”.
The ability to take file system snapshots, and thus to roll back the system, has long been a requested feature on Linux. Snapper, together with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.
Btrfs, a new copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point in time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with openSUSE Leap it is also possible to boot from Btrfs snapshots—see Section 3.3, “System Rollback by Booting from Snapshots” for more information.
Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to a remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.
openSUSE Leap supports two different kinds of VNC sessions: One-time sessions that “live” as long as the VNC connection from the client is kept up, and persistent sessions that “live” until they are explicitly terminated.
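As an illustration, connecting to a one-time session from another Linux machine might look as follows; the host name is hypothetical, and vncviewer is assumed to be provided by a client package such as tigervnc:

```shell
# Connect to VNC display :1 on the remote host
vncviewer jupiter.example.com:1

# Equivalent form, using the underlying TCP port (5900 + display number)
vncviewer jupiter.example.com:5901
```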
Sophisticated system configurations require specific disk setups. All common partitioning tasks can be done with YaST. To get persistent device naming with block devices, use the block devices below /dev/disk/by-id or /dev/disk/by-uuid. Logical Volume Management (LVM) is a disk partitioning scheme t…
openSUSE Leap supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.
Using this functionality, you can safely test kernel updates while being able to always fall back to the proven former kernel. To do this, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.
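A parallel kernel installation as described above can be driven from the command line roughly as follows; the version string shown is purely illustrative:

```shell
# Show all kernel versions known to the active repositories
# (installed packages are marked with "i" in the first column)
zypper search -s kernel-default

# Install a specific version alongside the running kernel
# (the version string here is an example only)
sudo zypper install kernel-default-4.12.14-lp150.12.22.1
```

Whether several kernel packages may be installed in parallel is controlled by the multiversion setting in /etc/zypp/zypp.conf.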
This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.
This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.
YaST in text mode uses the ncurses library to provide an easy pseudo-graphical user interface. The ncurses library is installed by default. The minimum supported size of the terminal emulator in which to run YaST is 80x25 characters.
When you start YaST in text mode, the YaST control center appears (see Figure 1.1). The main window consists of three areas. The left frame features the categories to which the various modules belong. This frame is active when YaST is started and therefore it is marked by a bold white border. The active category is selected. The right frame provides an overview of the modules available in the active category. The bottom frame contains the action buttons.
When you start the YaST control center, a category is selected automatically. Use ↓ and ↑ to change the category. To select a module from the category, activate the right frame with → and then use ↓ and ↑ to select the module. Keep the arrow keys pressed to scroll through the list of available modules. The currently selected module is highlighted. Press Enter to start the active module.
Various buttons or selection fields in the module contain a highlighted letter (yellow by default). Use Alt–highlighted_letter to select a button directly instead of navigating there with →|. Exit the YaST control center by pressing Alt–Q or by selecting and pressing Enter.
If a YaST dialog gets corrupted or distorted (for example, while resizing the window), press Ctrl–L to refresh and restore its contents.
YaST in text mode has a set of advanced key combinations.
Show a list of advanced hotkeys.
Change the color scheme.
Quit the application.
Refresh screen.
Show a list of advanced hotkeys.
Dump dialog to the log file as a screenshot.
Open YDialogSpy to see the widget hierarchy.
If your window manager uses global Alt combinations, the Alt combinations in YaST might not work. Keys like Alt or Shift can also be occupied by the settings of the terminal.
Alt shortcuts can be executed with Esc instead of Alt. For example, Esc–H replaces Alt–H. (First press Esc, then press H.)
If the Alt and Shift combinations are occupied by the window manager or the terminal, use the combinations Ctrl–F (forward) and Ctrl–B (backward) instead.
The function keys (F1 ... F12) also provide quick access to certain functions. Certain function keys might be occupied by the terminal and may not be available for YaST. However, the Alt key combinations and function keys should always be fully available on a pure text console.
Besides the text mode interface, YaST provides a pure command line interface. To get a list of YaST command line options, enter:
tux > sudo yast -h
To save time, the individual YaST modules can be started directly. To start a module, enter:
tux > sudo yast <module_name>
View a list of all module names available on your system with yast
-l or yast --list. Start the network module,
for example, with yast lan.
If you know a package name and the package is provided by any of your
active installation repositories, you can use the command line option
-i to install the package:
tux > sudo yast -i <package_name>
or
tux > sudo yast --install <package_name>
<package_name> can be a single short package name (for example gvim), which is installed with dependency checking, or the full path to an RPM package, which is installed without dependency checking.
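For example, the two forms could look like this (the RPM path is hypothetical):

```shell
# Short package name: resolved from the active repositories,
# installed with dependency checking
sudo yast -i gvim

# Full path to an RPM file: installed without dependency checking
# (file name is an example only)
sudo yast -i /tmp/gvim-8.0-lp150.1.1.x86_64.rpm
```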
If you need a command line based software management utility with functionality beyond what YaST provides, consider using Zypper. This utility uses the same software management library that is also the foundation for the YaST package manager. The basic usage of Zypper is covered in Section 2.1, “Using Zypper”.
To use YaST functionality in scripts, YaST provides command line support for individual modules. Not all modules have command line support. To display the available options of a module, enter:
tux > sudo yast <module_name> help
If a module does not provide command line support, the module is started in text mode and the following message appears:
This YaST module does not support the command line interface.
This chapter describes Zypper and RPM, two command line tools for managing
software. For a definition of the terminology used in this context (for
example, repository, patch, or
update) refer to
Section 11.1, “Definition of Terms”.
Zypper is a command line package manager for installing, updating and removing packages as well as for managing repositories. It is especially useful for accomplishing remote software management tasks or managing software from shell scripts.
The general syntax of Zypper is:
zypper [--global-options] COMMAND [--command-options] [arguments]
The components enclosed in brackets are not required. See zypper
help for a list of general options and all commands. To get help
for a specific command, type zypper help
COMMAND.
The simplest way to execute Zypper is to type its name, followed by a command. For example, to apply all needed patches to the system, use:
tux > sudo zypper patch
Additionally, you can choose from one or more global options by typing them immediately before the command:
tux > sudo zypper --non-interactive patch
In the above example, the option --non-interactive means
that the command is run without asking anything (automatically applying
the default answers).
To use options that are specific to a particular command, type them immediately after the command:
tux > sudo zypper patch --auto-agree-with-licenses
In the above example, --auto-agree-with-licenses is used
to apply all needed patches to a system without you being asked to
confirm any licenses. Instead, licenses are accepted automatically.
Some commands require one or more arguments. For example, when using the
command install, you need to specify which package or
which packages you want to install:
tux > sudo zypper install mplayer
Some options also require a single argument. The following command will list all known patterns:
tux > zypper search -t pattern
You can combine all of the above. For example, the following command will
install the mc and vim packages from
the factory repository while being verbose:
tux > sudo zypper -v install --from factory mc vim
The --from option makes sure to keep all repositories
enabled (for solving any dependencies) while requesting the package from the
specified repository.
Most Zypper commands have a dry-run option that does a
simulation of the given command. It can be used for test purposes.
tux > sudo zypper remove --dry-run MozillaFirefox
Zypper supports the global --userdata
STRING option. You can specify a string
with this option, which gets written to Zypper's log files and plug-ins
(such as the Btrfs plug-in). It can be used to mark and identify
transactions in log files.
tux > sudo zypper --userdata STRING patch
To install or remove packages, use the following commands:
tux > sudo zypper install PACKAGE_NAME
tux > sudo zypper remove PACKAGE_NAME
Do not remove mandatory system packages like glibc, zypper, or kernel. If they are removed, the system can become unstable or stop working altogether.
There are various ways to address packages with the commands
zypper install and zypper remove.
tux > sudo zypper install MozillaFirefox
tux > sudo zypper install MozillaFirefox-52.2
tux > sudo zypper install mozilla:MozillaFirefox
Where mozilla is the alias of the repository from
which to install.
You can select all packages that have names starting or ending with a certain string. Use wild cards with care, especially when removing packages. The following command will install all packages starting with “Moz”:
tux > sudo zypper install 'Moz*'
-debuginfo Packages
When debugging a problem, you sometimes need to temporarily install a
lot of -debuginfo packages which give you more
information about running processes. After your debugging session
finishes and you need to clean the environment, run the following:
tux > sudo zypper remove '*-debuginfo'
For example, if you want to install a Perl module without knowing the name of the package, capabilities come in handy:
tux > sudo zypper install firefox
Together with a capability, you can specify a hardware architecture and a version:
The name of the desired hardware architecture is appended to the
capability after a full stop. For example, to specify the AMD64/Intel 64
architectures (which in Zypper is named x86_64),
use:
tux > sudo zypper install 'firefox.x86_64'
Versions must be appended to the end of the string and must be
preceded by an operator: < (less than),
<= (less than or equal), =
(equal), >= (greater than or equal),
> (greater than).
tux > sudo zypper install 'firefox>=52.2'
You can also combine a hardware architecture and version requirement:
tux > sudo zypper install 'firefox.x86_64>=52.2'
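The quotes around these arguments matter. A minimal sketch, using a small helper function that merely stands in for zypper, shows why: unquoted, the shell intercepts >= as output redirection instead of passing the operator through.

```shell
# Illustrative helper (not part of zypper): print the arguments exactly
# as a command would receive them.
show_args() { printf '%s\n' "$@"; }

# Quoted: the whole version requirement reaches the command intact.
quoted=$(show_args 'firefox>=52.2')

# Unquoted, `show_args firefox>=52.2` would instead redirect output to a
# file named "=52.2" and pass only "firefox" -- so always quote the
# version operators.
```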
You can also specify a local or remote path to a package:
tux > sudo zypper install /tmp/install/MozillaFirefox.rpm
tux > sudo zypper install http://download.example.com/MozillaFirefox.rpm
To install and remove packages simultaneously, use the
+/- modifiers. To install
emacs and simultaneously remove
vim, use:
tux > sudo zypper install emacs -vim
To remove emacs and simultaneously install vim, use:
tux > sudo zypper remove emacs +vim
To prevent a package name starting with - from being
interpreted as a command option, always use it as the second argument. If
this is not possible, precede it with --:
tux > sudo zypper install -emacs +vim    # Wrong
tux > sudo zypper install vim -emacs     # Correct
tux > sudo zypper install -- -emacs +vim # Correct
tux > sudo zypper remove emacs +vim      # Correct
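The -- convention is not specific to Zypper: POSIX option parsing treats -- as the end of options, so everything after it is taken as an argument even if it starts with -. A sketch with a toy parser (the function name is illustrative, not a zypper interface):

```shell
# Toy parser: consume options with getopts, then print the remaining
# arguments. "--" ends option parsing, as it does for zypper.
leftover_args() {
  OPTIND=1                          # reset the parser between calls
  while getopts "v" opt; do :; done
  shift $((OPTIND - 1))
  printf '%s\n' "$*"
}

# "-emacs" after "--" is treated as an argument, not as an option.
result=$(leftover_args -- -emacs +vim)
```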
To automatically remove any packages that become unneeded after removing
a specified package, use the
--clean-deps option:
tux > sudo zypper rm PACKAGE_NAME --clean-deps
By default, Zypper asks for a confirmation before installing or removing a
selected package, or when a problem occurs. You can override this behavior
using the --non-interactive option. This option must be
given before the actual command (install,
remove, and patch), as can be seen in
the following:
tux > sudo zypper --non-interactive install PACKAGE_NAME
This option allows the use of Zypper in scripts and cron jobs.
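For example, a script or cron job could wrap these calls in a small helper. The sketch below is illustrative: the function name is hypothetical, and the overridable ZYPPER variable is a convention added here (not a Zypper feature) so the routine can be exercised without a real zypper binary.

```shell
#! /bin/sh
# Sketch of an unattended patch routine for scripts and cron jobs.
# ZYPPER defaults to the real binary; override it for dry testing.
ZYPPER="${ZYPPER:-zypper}"

auto_patch() {
  # Refresh repository metadata first, then apply all needed patches
  # without any prompts, accepting licenses automatically.
  "$ZYPPER" --non-interactive refresh || return 1
  "$ZYPPER" --non-interactive patch --auto-agree-with-licenses
}
```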
To install the corresponding source package of a package, use:
tux > zypper source-install PACKAGE_NAME
When executed as root, the default location to install source
packages is /usr/src/packages/; when run as a user, it is
~/rpmbuild. These values can be
changed in your local rpm configuration.
This command will also install the build dependencies of the specified
package. If you do not want this, add the switch -D:
tux > sudo zypper source-install -D PACKAGE_NAME
To install only the build dependencies use -d.
tux > sudo zypper source-install -d PACKAGE_NAME
Of course, this will only work if you have the repository with the source packages enabled in your repository list (it is added by default, but not enabled). See Section 2.1.5, “Managing Repositories with Zypper” for details on repository management.
A list of all source packages available in your repositories can be obtained with:
tux > zypper search -t srcpackage
You can also download source packages for all installed packages to a local directory. To download source packages, use:
tux > zypper source-download
The default download directory is
/var/cache/zypper/source-download. You can change it
using the --directory option. To only show missing or
extraneous packages without downloading or deleting anything, use the
--status option. To delete extraneous source packages, use
the --delete option. To disable deleting, use the
--no-delete option.
Normally you can only install or refresh packages from enabled
repositories. The --plus-content
TAG option helps you specify
repositories to be refreshed, temporarily enabled during the current Zypper
session, and disabled after it completes.
For example, to enable repositories that may provide additional
-debuginfo or -debugsource
packages, use --plus-content debug. You can specify this
option multiple times.
To temporarily enable such 'debug' repositories to install a specific
-debuginfo package, use the option as follows:
tux > sudo zypper --plus-content debug \
   install "debuginfo(build-id)=eb844a5c20c70a59fc693cd1061f851fb7d046f4"
The build-id string is reported by
gdb for missing debuginfo packages.
To verify whether all dependencies are still fulfilled and to repair missing dependencies, use:
tux > zypper verify
In addition to dependencies that must be fulfilled, some packages “recommend” other packages. These recommended packages are only installed if actually available and installable. In case recommended packages were made available after the recommending package has been installed (by adding additional packages or hardware), use the following command:
tux > sudo zypper install-new-recommends
This command is very useful after plugging in a Web cam or Wi-Fi device. It will install drivers for the device and related software, if available. Drivers and related software are only installable if certain hardware dependencies are fulfilled.
There are three different ways to update software using Zypper: by
installing patches, by installing a new version of a package or by updating
the entire distribution. The latter is achieved with zypper
dist-upgrade. Upgrading openSUSE Leap is discussed in
Chapter 14, Upgrading the System and System Changes.
To install all officially released patches that apply to your system, run:
tux > sudo zypper patch
All patches available from repositories configured on your computer are
checked for their relevance to your installation. If they are relevant (and
not classified as optional or
feature), they are installed immediately.
If a patch that is about to be installed includes changes that require a system reboot, you will be warned beforehand.
The plain zypper patch command does not apply patches
from third-party repositories. To update the third-party repositories as well,
use the --with-update command option as follows:
tux > sudo zypper patch --with-update
To install also optional patches, use:
tux > sudo zypper patch --with-optional
To install all patches relating to a specific Bugzilla issue, use:
tux > sudo zypper patch --bugzilla=NUMBER
To install all patches relating to a specific CVE database entry, use:
tux > sudo zypper patch --cve=NUMBER
For example, to install a security patch with the CVE number
CVE-2010-2713, execute:
tux > sudo zypper patch --cve=CVE-2010-2713
To install only patches which affect Zypper and the package management itself, use:
tux > sudo zypper patch --updatestack-only
Bear in mind that other command options that would also update other
repositories are dropped if you use the
--updatestack-only command option.
To find out whether patches are available, Zypper allows viewing the following information:
To list the number of needed patches (patches that apply to your system
but are not yet installed), use patch-check:
tux > zypper patch-check
Loading repository data...
Reading installed packages...
5 patches needed (1 security patch)
This command can be combined with the
--updatestack-only option to list only the patches
which affect Zypper and the package management itself.
To list all needed patches (patches that apply to your system but are
not yet installed), use list-patches:
tux > zypper list-patches
Repository | Name | Category | Severity | Interactive | Status | S>
-----------+-------------------+----------+----------+-------------+--------+-->
Update | openSUSE-2017-828 | security | moderate | --- | needed | S>
Found 1 applicable patch:
1 patch needed (1 security patch)
To list all patches available for openSUSE Leap, regardless of whether
they are already installed or apply to your installation, use
zypper patches.
It is also possible to list and install patches relevant to specific
issues. To list specific patches, use the zypper
list-patches command with the following options:
To list all needed patches that relate to Bugzilla issues, use the
option --bugzilla.
To list patches for a specific bug, you can also specify a bug number:
--bugzilla=NUMBER. To search
for patches relating to multiple Bugzilla issues, add commas between the
bug numbers, for example:
tux > zypper list-patches --bugzilla=972197,956917
To list all needed patches that relate to an entry in the CVE database
(Common Vulnerabilities and Exposures), use the option
--cve.
To list patches for a specific CVE database entry, you can also specify
a CVE number: --cve=NUMBER.
To search for patches relating to multiple CVE database entries, add
commas between the CVE numbers, for example:
tux > zypper list-patches --cve=CVE-2016-2315,CVE-2016-2324
To list all patches regardless of whether they are needed, use the option
--all additionally. For example, to list all patches with
a CVE number assigned, use:
tux > zypper list-patches --all --cve
Issue | No. | Patch | Category | Severity | Status
------+---------------+-------------------+-------------+-----------+----------
cve | CVE-2015-0287 | SUSE-SLE-Module.. | recommended | moderate | needed
cve | CVE-2014-3566 | SUSE-SLE-SERVER.. | recommended | moderate | not needed
[...]
If a repository contains only new packages, but does not provide patches,
zypper patch does not show any effect. To update
all installed packages with newer available versions (while maintaining
system integrity), use:
tux > sudo zypper update
To update individual packages, specify the package with either the update or install command:
tux > sudo zypper update PACKAGE_NAME
tux > sudo zypper install PACKAGE_NAME
A list of all new installable packages can be obtained with the command:
tux > zypper list-updates
Note that this command only lists packages that match the following criteria:
has the same vendor as the already installed package,
is provided by repositories with at least the same priority as the already installed package,
is installable (all dependencies are satisfied).
A list of all new available packages (regardless whether installable or not) can be obtained with:
tux > sudo zypper list-updates --all
To find out why a new package cannot be installed, use the zypper
install or zypper update command as described
above.
Whenever you remove a repository from Zypper or upgrade your system, some packages can get in an “orphaned” state. These orphaned packages belong to no active repository anymore. The following command gives you a list of these:
tux > sudo zypper packages --orphaned
With this list, you can decide if a package is still needed or can be removed safely.
When patching, updating or removing packages, there may be running processes
on the system that continue to use files deleted by the update
or removal. Use zypper ps to list processes using deleted
files. In case the process belongs to a known service, the service name is
listed, making it easy to restart the service. By default zypper
ps shows a table:
tux > zypper ps
PID | PPID | UID | User | Command | Service | Files
------+------+-----+-------+--------------+--------------+-------------------
814 | 1 | 481 | avahi | avahi-daemon | avahi-daemon | /lib64/ld-2.19.s->
| | | | | | /lib64/libdl-2.1->
| | | | | | /lib64/libpthrea->
| | | | | | /lib64/libc-2.19->
[...]
PID: ID of the process
PPID: ID of the parent process
UID: ID of the user running the process
Login: Login name of the user running the process
Command: Command used to execute the process
Service: Service name (only if command is associated with a system service)
Files: The list of the deleted files
The output format of zypper ps can be controlled as
follows:
zypper ps -s
Create a short table not showing the deleted files.
tux > zypper ps -s
PID | PPID | UID | User | Command | Service
------+------+------+---------+--------------+--------------
814 | 1 | 481 | avahi | avahi-daemon | avahi-daemon
817 | 1 | 0 | root | irqbalance | irqbalance
1567 | 1 | 0 | root | sshd | sshd
1761 | 1 | 0 | root | master | postfix
1764 | 1761 | 51 | postfix | pickup | postfix
1765 | 1761 | 51 | postfix | qmgr | postfix
2031 | 2027 | 1000 | tux     | bash         |
zypper ps -ss
Show only processes associated with a system service.
PID  | PPID | UID  | User    | Command      | Service
------+------+------+---------+--------------+--------------
 814 |    1 |  481 | avahi   | avahi-daemon | avahi-daemon
 817 |    1 |    0 | root    | irqbalance   | irqbalance
1567 |    1 |    0 | root    | sshd         | sshd
1761 |    1 |    0 | root    | master       | postfix
1764 | 1761 |   51 | postfix | pickup       | postfix
1765 | 1761 |   51 | postfix | qmgr         | postfix
zypper ps -sss
Only show system services using deleted files.
avahi-daemon
irqbalance
postfix
sshd
zypper ps --print "systemctl status %s"
Show the commands to retrieve status information for services which might need a restart.
systemctl status avahi-daemon
systemctl status irqbalance
systemctl status postfix
systemctl status sshd
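The per-service commands above can also be generated in a loop. The helper below is a hypothetical sketch: it only prints the systemctl commands rather than running them, and the ZYPPER_PS variable is an illustrative hook (not a zypper feature) so the loop can be demonstrated without zypper installed.

```shell
# Print one "systemctl restart" command per service that zypper ps -sss
# reports as using deleted files.
stale_service_cmds() {
  # ZYPPER_PS may override the command, e.g. for testing.
  for svc in $(${ZYPPER_PS:-zypper ps -sss}); do
    printf 'systemctl restart %s\n' "$svc"
  done
}
```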
For more information about service handling refer to
Chapter 10, The systemd Daemon.
All installation or patch commands of Zypper rely on a list of known repositories. To list all repositories known to the system, use the command:
tux > zypper repos
The result will look similar to the following output:
tux > zypper repos
# | Alias | Name | Enabled | GPG Check | Refresh
---+-----------------------+------------------+---------+-----------+--------
1 | Leap-42.3-Main | Main (OSS) | Yes | (r ) Yes | Yes
2 | Leap-42.3-Update | Update (OSS) | Yes | (r ) Yes | Yes
3 | Leap-42.3-NOSS | Main (NON-OSS) | Yes | (r ) Yes | Yes
4 | Leap-42.3-Update-NOSS | Update (NON-OSS) | Yes | (r ) Yes | Yes
[...]
When specifying repositories in various commands, an alias, URI or
repository number from the zypper repos command output
can be used. A repository alias is a short version of the repository name
for use in repository handling commands. Note that the repository numbers
can change after modifying the list of repositories. The alias will never
change by itself.
By default, details such as the URI or the priority of the repository are not displayed. Use the following command to list all details:
tux > zypper repos -d
To add a repository, run
tux > sudo zypper addrepo URI ALIAS
URI can either be an Internet repository, a network resource, a directory or a CD or DVD (see http://en.opensuse.org/openSUSE:Libzypp_URIs for details). The ALIAS is a shorthand and unique identifier of the repository. You can freely choose it, with the only exception that it needs to be unique. Zypper will issue a warning if you specify an alias that is already in use.
zypper enables you to fetch changes in packages from
configured repositories. To fetch the changes, run:
tux > sudo zypper refresh
By default, some commands perform refresh
automatically, so you do not need to run the command explicitly.
The refresh command enables you to view changes also in
disabled repositories, by using the --plus-content
option:
tux > sudo zypper --plus-content refresh
This option fetches changes in repositories, but keeps the disabled repositories in the same state—disabled.
To remove a repository from the list, use the command
zypper removerepo together with the alias or number of
the repository you want to delete. For example, to remove the repository
Leap-42.3-NOSS from Example 2.1, “Zypper—List of Known Repositories”,
use one of the following commands:
tux > sudo zypper removerepo 4
tux > sudo zypper removerepo "Leap-42.3-NOSS"
Enable or disable repositories with zypper modifyrepo.
You can also alter the repository's properties (such as refreshing
behavior, name or priority) with this command. The following command will
enable the repository named updates, turn on
auto-refresh and set its priority to 20:
tux > sudo zypper modifyrepo -er -p 20 'updates'
Modifying repositories is not limited to a single repository—you can also operate on groups:
-a: all repositories
-l: local repositories
-t: remote repositories
-m TYPE: repositories of a certain type (where TYPE can be one of the
following: http, https, ftp, cd, dvd, dir, file, cifs, smb, nfs, hd, iso)
To rename a repository alias, use the renamerepo
command. The following example changes the alias from Mozilla
Firefox to firefox:
tux > sudo zypper renamerepo 'Mozilla Firefox' firefox
Zypper offers various methods to query repositories or packages. To get lists of all products, patterns, packages or patches available, use the following commands:
tux > zypper products
tux > zypper patterns
tux > zypper packages
tux > zypper patches
To query all repositories for certain packages, use
search. To get information regarding particular packages,
use the info command.
The zypper search command works on package names, or,
optionally, on package summaries and descriptions. Strings wrapped in
/ are interpreted as regular expressions. By default,
the search is not case-sensitive.
To search for the string fire:
tux > zypper search "fire"
To search for the exact package name MozillaFirefox:
tux > zypper search --match-exact "MozillaFirefox"
To also search in package summaries and descriptions:
tux > zypper search -d fire
To only display packages that are not yet installed:
tux > zypper search -u fire
To search for packages containing the string fir not
followed by e:
tux > zypper se "/fir[^e]/"
To search for packages which provide a special capability, use the command
what-provides. For example, if you want to know which
package provides the Perl module SVN::Core, use the
following command:
tux > zypper what-provides 'perl(SVN::Core)'
The what-provides
CAPABILITY command is similar to
rpm -q --whatprovides
CAPABILITY, but RPM is only able to query the
RPM database (that is the database of all installed packages). Zypper, on
the other hand, will tell you about providers of the capability from any
repository, not only those that are installed.
To query single packages, use info with an exact package
name as an argument. This displays detailed information about a package. In
case the package name does not match any package name from repositories,
the command outputs detailed information for non-package matches. If you
request a specific type (by using the -t option) and the
type does not exist, the command outputs other available matches but
without detailed information.
If you specify a source package, the command displays binary packages built from the source package. If you specify a binary package, the command outputs the source packages used to build the binary package.
To also show what is required/recommended by the package, use the options
--requires and --recommends:
tux > zypper info --requires MozillaFirefox
SUSE products are generally supported for 10 years. Often, you can extend that standard life cycle by using the extended support offerings of SUSE, which add three years of support. Depending on your product, find the exact support life cycle at https://www.suse.com/lifecycle.
To check the lifecycle of your product and supported package, use the
zypper lifecycle command as shown below:
root # zypper lifecycle
Product end of support
Codestream: SUSE Linux Enterprise Server 15    2028-04-23
SUSE Linux Enterprise Server 15                n/a*

Module end of support
Basesystem Module                              2021-07-31

No packages with end of support different from product.
*) See https://www.suse.com/lifecycle for latest information
Zypper now comes with a configuration file, allowing you to permanently
change Zypper's behavior (either system-wide or user-specific). For
system-wide changes, edit /etc/zypp/zypper.conf. For
user-specific changes, edit ~/.zypper.conf. If
~/.zypper.conf does not yet exist, you can use
/etc/zypp/zypper.conf as a template: copy it to
~/.zypper.conf and adjust it to your liking. Refer to
the comments in the file for help about the available options.
If you have trouble accessing packages from configured repositories (for example, Zypper cannot find a certain package even though you know it exists in one of the repositories), refreshing the repositories may help:
tux > sudo zypper refresh
If that does not help, try
tux > sudo zypper refresh -fdb
This forces a complete refresh and rebuild of the database, including a forced download of raw metadata.
If the Btrfs file system is used on the root partition and
snapper is installed, Zypper automatically calls
snapper when committing changes to the file system to
create appropriate file system snapshots. These snapshots can be used to revert any changes made by Zypper. See Chapter 3, System Recovery and Snapshot Management with Snapper for
more information.
For more information on managing software from the command line, enter
zypper help, zypper help
COMMAND or refer to the
zypper(8) man page. For a complete and detailed command
reference, cheat sheets with the most important
commands, and information on how to use Zypper in scripts and applications,
refer to http://en.opensuse.org/SDB:Zypper_usage. A
list of software changes for the latest openSUSE Leap version can be found
at http://en.opensuse.org/openSUSE:Zypper versions.
RPM (RPM Package Manager) is used for managing software packages. Its main
commands are rpm and rpmbuild. The
powerful RPM database can be queried by the users, system administrators and
package builders for detailed information about the installed software.
Essentially, rpm has five modes: installing, uninstalling
(or updating) software packages, rebuilding the RPM database, querying RPM
databases or individual RPM archives, integrity checking of packages and signing
packages. rpmbuild can be used to build installable
packages from pristine sources.
Installable RPM archives are packed in a special binary format. These
archives consist of the program files to install and certain meta information
used during the installation by rpm to configure the
software package or stored in the RPM database for documentation purposes.
RPM archives normally have the extension .rpm.
For several packages, the components needed for software development
(libraries, headers, include files, etc.) have been put into separate
packages. These development packages are only needed if you want to compile
software yourself (for example, the most recent GNOME packages). They can
be identified by the name extension -devel, such as the
packages alsa-devel and
gimp-devel.
RPM packages have a GPG signature. To verify the signature of an RPM
package, use the command rpm --checksig
PACKAGE-1.2.3.rpm to determine whether the
package originates from SUSE or from another trustworthy facility. This is
especially recommended for update packages from the Internet.
Normally, the installation of an RPM archive is quite simple: rpm
-i PACKAGE.rpm. With this command the
package is installed, but only if its dependencies are fulfilled and if
there are no conflicts with other packages. If dependencies are not met,
rpm prints an error message listing the packages that need to be installed to
meet dependency requirements. In the background, the RPM database ensures
that no conflicts arise—a specific file can only belong to one
package. By choosing different options, you can force rpm
to ignore these defaults, but this is only for experts. Otherwise, you risk
compromising the integrity of the system and possibly jeopardize the ability
to update the system.
The options -U or --upgrade and
-F or --freshen can be used to update a
package (for example, rpm -F
PACKAGE.rpm). This command removes the files of
the old version and immediately installs the new files. The difference
between the two versions is that -U installs packages that
previously did not exist in the system, while -F merely
updates previously installed packages. When updating, rpm
updates configuration files carefully using the following strategy:
If a configuration file was not changed by the system administrator,
rpm installs the new version of the appropriate file.
No action by the system administrator is required.
If a configuration file was changed by the system administrator before the
update, rpm saves the changed file with the extension
.rpmorig or .rpmsave (backup
file) and installs the version from the new package. This is done only if
the originally installed file and the newer version are different. If this is
the case, compare the backup file (.rpmorig or
.rpmsave) with the newly installed file and make your
changes again in the new file. Afterward, delete all
.rpmorig and .rpmsave files to
avoid problems with future updates.
.rpmnew files appear if the configuration file
already exists and if the noreplace
label was specified in the .spec file.
Following an update, .rpmsave and
.rpmnew files should be removed after comparing them,
so they do not obstruct future updates. The .rpmorig
extension is assigned if the file has not previously been recognized by the
RPM database.
Otherwise, .rpmsave is used. In other words,
.rpmorig results from updating from a foreign format to
RPM. .rpmsave results from updating from an older RPM
to a newer RPM. .rpmnew does not disclose any
information as to whether the system administrator has made any changes to the
configuration file. A list of these files is available in
/var/adm/rpmconfigcheck. Some configuration files (like
/etc/httpd/httpd.conf) are not overwritten to allow
continued operation.
The -U switch is not just an
equivalent to uninstalling with the -e option and
installing with the -i option. Use -U
whenever possible.
To remove a package, enter rpm -e
PACKAGE. This command only deletes the package if
there are no unresolved dependencies. It is theoretically impossible to
delete Tcl/Tk, for example, as long as another application requires it. Even
in this case, RPM calls for assistance from the database. If such a deletion
is, for whatever reason, impossible (even if no
additional dependencies exist), it may be helpful to rebuild the RPM
database using the option --rebuilddb.
Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM onto an old RPM results in a completely new RPM. It is not necessary to have a copy of the old RPM because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs.
The makedeltarpm and applydeltarpm
binaries are part of the delta RPM suite (package
deltarpm) and help you create and apply delta RPM
packages. With the following command, you can create a delta RPM called
new.delta.rpm. It assumes that
old.rpm and new.rpm are present:
tux > sudo makedeltarpm old.rpm new.rpm new.delta.rpm
Using applydeltarpm, you can reconstruct the new RPM from
the file system if the old package is already installed:
tux > sudo applydeltarpm new.delta.rpm new.rpm
To derive it from the old RPM without accessing the file system, use the
-r option:
tux > sudo applydeltarpm -r old.rpm new.delta.rpm new.rpm
See /usr/share/doc/packages/deltarpm/README for
technical details.
With the -q option rpm initiates
queries, making it possible to inspect an RPM archive (by adding the option
-p) and to query the RPM database of installed packages.
Several switches are available to specify the type of information required.
See Table 2.1, “The Most Important RPM Query Options”.
-i: Package information
-l: File list
-f FILE: Query the package that contains the file FILE (the full path must be specified with FILE)
-s: File list with status information (implies -l)
-d: List only documentation files (implies -l)
-c: List only configuration files (implies -l)
--dump: File list with complete details (to be used with -l, -c, or -d)
--provides: List features of the package that another package can request with --requires
--requires, -R: Capabilities the package requires
--scripts: Installation scripts (preinstall, postinstall, uninstall)
For example, the command rpm -q -i wget displays the
information shown in Example 2.2, “rpm -q -i wget”.
tux > rpm -q -i wget
Name        : wget
Version     : 1.14
Release     : 10.3
Architecture: x86_64
Install Date: Fri 14 Jul 2017 04:09:58 PM CEST
Group       : Productivity/Networking/Web/Utilities
Size        : 2046452
License     : GPL-3.0+
Signature   : RSA/SHA256, Wed 10 May 2017 02:40:21 AM CEST, Key ID b88b2fd43dbdc284
Source RPM  : wget-1.14-10.3.src.rpm
Build Date  : Wed 10 May 2017 02:40:12 AM CEST
Build Host  : lamb55
Relocations : (not relocatable)
Packager    : http://bugs.opensuse.org
Vendor      : openSUSE
URL         : http://www.gnu.org/software/wget/
Summary     : A Tool for Mirroring FTP and HTTP Servers
Description :
Wget enables you to retrieve WWW documents or FTP files from a server.
This can be done in script files or via the command line.
Distribution: openSUSE Leap 42.3
The option -f only works if you specify the complete file
name with its full path. Provide as many file names as desired. For example:
tux > rpm -q -f /bin/rpm /usr/bin/wget
rpm-4.11.2-15.1.x86_64
wget-1.14-17.1.x86_64
If only part of the file name is known, use a shell script as shown in Example 2.3, “Script to Search for Packages”. Pass the partial file name to the script shown as a parameter when running it.
#! /bin/sh
# Print the package owning each installed file whose path matches $1.
for i in $(rpm -q -a -l | grep "$1"); do
echo "\"$i\" is in package:"
rpm -q -f "$i"
echo ""
done
The command rpm -q --changelog
PACKAGE displays a detailed list of change
information about a specific package, sorted by date.
With the installed RPM database, verification checks can be made. Initiate
these with -V, or --verify. With this
option, rpm shows all files in a package that have been
changed since installation. rpm uses eight single-character
flags to hint at the following changes:
5
MD5 check sum
S
File size
L
Symbolic link
T
Modification time
D
Major and minor device numbers
U
Owner
G
Group
M
Mode (permissions and file type)
In the case of configuration files, the letter c is
printed. For example, for changes to /etc/wgetrc
(wget package):
tux > rpm -V wget
S.5....T c /etc/wgetrc
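Output in this format lends itself to simple filtering. The following sketch extracts changed configuration files (lines flagged with c in the second column) from verification output; the sample lines are invented for the demonstration, and real input would come from rpm -Va or rpm -V PACKAGE:

```shell
#!/bin/sh
# Sketch: filter "rpm -V"-style output for changed configuration files,
# i.e. lines whose second column is the "c" flag. The sample lines below
# are hypothetical; real input would come from `rpm -Va`.
sample='S.5....T  c /etc/wgetrc
.......T    /usr/bin/wget
S.5....T  c /etc/sysconfig/backup'

# For non-config files the path is in column 2, so the test fails there.
changed_configs=$(printf '%s\n' "$sample" | awk '$2 == "c" { print $3 }')
printf '%s\n' "$changed_configs"
```

In practice you would replace the here-string with a pipe, for example rpm -Va | awk '$2 == "c" { print $3 }'.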
The files of the RPM database are placed in
/var/lib/rpm. If the partition
/usr has a size of 1 GB, this database can occupy
nearly 30 MB, especially after a complete update. If the database is
much larger than expected, it is useful to rebuild the database with the
option --rebuilddb. Before doing this, make a backup of the
old database. The cron script
cron.daily makes daily copies of the database (packed
with gzip) and stores them in /var/adm/backup/rpmdb.
The number of copies is controlled by the variable
MAX_RPMDB_BACKUPS (default: 5) in
/etc/sysconfig/backup. The size of a single backup is
approximately 1 MB for 1 GB in /usr.
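The rotation that keeps only MAX_RPMDB_BACKUPS copies can be sketched as follows; the directory layout and file names here are simplified assumptions for the demonstration, not the exact behavior of the cron.daily script:

```shell
#!/bin/sh
# Sketch: keep only the newest MAX_RPMDB_BACKUPS database copies.
# File names and layout are invented for the demo.
MAX_RPMDB_BACKUPS=5
backup_dir=$(mktemp -d)

# Simulate eight daily backups (oldest sorts first).
for day in 01 02 03 04 05 06 07 08; do
    : > "$backup_dir/Packages-2018-01-$day.gz"
done

# Remove the oldest copies until only MAX_RPMDB_BACKUPS remain
# (head -n -N requires GNU coreutils).
ls "$backup_dir" | sort | head -n -"$MAX_RPMDB_BACKUPS" |
while read -r f; do
    rm "$backup_dir/$f"
done

remaining=$(ls "$backup_dir" | wc -l)
echo "$remaining backups kept"
```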
All source packages carry a .src.rpm extension (source
RPM).
Source packages can be copied from the installation medium to the hard disk
and unpacked with YaST. They are not, however, marked as installed
([i]) in the package manager. This is because the source
packages are not entered in the RPM database. Only
installed operating system software is listed in the
RPM database. When you “install” a source package, only the
source code is added to the system.
The following directories must be available for rpm and
rpmbuild in /usr/src/packages
(unless you specified custom settings in a file like
/etc/rpmrc):
SOURCES
for the original sources (.tar.bz2 or
.tar.gz files, etc.) and for distribution-specific
adjustments (mostly .diff or
.patch files)
SPECS
for the .spec files, similar to a meta Makefile,
which control the build process
BUILD
the directory in which all sources are unpacked, patched and compiled
RPMS
where the completed binary packages are stored
SRPMS
where the source RPMs are stored
When you install a source package with YaST, all the necessary components
are installed in /usr/src/packages: the sources and the
adjustments in SOURCES and the relevant
.spec file in SPECS.
Do not experiment with system components
(glibc,
rpm, etc.), because this
endangers the stability of your system.
The following example uses the wget.src.rpm package.
After installing the source package, you should have files similar to those
in the following list:
/usr/src/packages/SOURCES/wget-1.11.4.tar.bz2
/usr/src/packages/SOURCES/wgetrc.patch
/usr/src/packages/SPECS/wget.spec
rpmbuild -bX
/usr/src/packages/SPECS/wget.spec starts the
compilation. X is a wild card for various stages
of the build process (see the output of --help or the RPM
documentation for details). The following is merely a brief explanation:
-bp
Prepare sources in /usr/src/packages/BUILD: unpack
and patch.
-bc
Do the same as -bp, but with additional compilation.
-bi
Do the same as -bp, but with additional installation of
the built software. Caution: if the package does not support the
BuildRoot feature, you might overwrite configuration files.
-bb
Do the same as -bi, but with the additional creation of
the binary package. If the compile was successful, the binary should be
in /usr/src/packages/RPMS.
-ba
Do the same as -bb, but with the additional creation of
the source RPM. If the compilation was successful, the source RPM should
be in /usr/src/packages/SRPMS.
--short-circuit
Skip some steps.
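All of these stages operate on a .spec file. For orientation, here is a minimal, hypothetical spec file; the package name, source tarball, and build commands are invented for the example, and a real spec file usually carries more metadata:

```spec
# Hypothetical minimal spec file; names and paths are invented.
Name:           hello
Version:        1.0
Release:        1
Summary:        Example package
License:        GPL-3.0+
Group:          Development/Tools
Source0:        hello-1.0.tar.gz

%description
Minimal example package used to illustrate the rpmbuild stages.

%prep
%setup -q

%build
make %{?_smp_mflags}

%install
make DESTDIR=%{buildroot} install

%files
/usr/bin/hello
```

With such a file, rpmbuild -bp would unpack hello-1.0.tar.gz into BUILD, and rpmbuild -bb would produce the binary package in RPMS.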
The binary RPM created can now be installed with rpm
-i or, preferably, with rpm
-U. Installation with rpm makes it
appear in the RPM database.
Keep in mind that the BuildRoot directive in the spec file has been
deprecated since openSUSE Leap 42.1. If you still need this feature, use the
--buildroot option as a workaround.
The danger with many packages is that unwanted files are added to the
running system during the build process. To prevent this, use
build, which creates a defined environment in which
the package is built. To establish this chroot environment, the
build script must be provided with a complete package
tree. This tree can be made available on the hard disk, via NFS, or from
DVD. Set the position with build --rpms
DIRECTORY. Unlike rpm, the
build command looks for the .spec
file in the source directory. To build wget (like in
the above example) with the DVD mounted in the system under
/media/dvd, use the following commands as
root:
root # cd /usr/src/packages/SOURCES/
root # mv ../SPECS/wget.spec .
root # build --rpms /media/dvd/suse/ wget.spec
Subsequently, a minimum environment is established at
/var/tmp/build-root. The package is built in this
environment. Upon completion, the resulting packages are located in
/var/tmp/build-root/usr/src/packages/RPMS.
The build script offers several additional options. For
example, cause the script to prefer your own RPMs, omit the initialization
of the build environment or limit the rpm command to one
of the above-mentioned stages. Access additional information with
build --help and by reading the
build man page.
Midnight Commander (mc) can display the contents of RPM
archives and copy parts of them. It represents archives as virtual file
systems, offering all usual menu options of Midnight Commander. Display the
HEADER with F3. View the archive
structure with the cursor keys and Enter. Copy archive
components with F5.
A full-featured package manager is available as a YaST module. For details, see Chapter 11, Installing or Removing Software.
File system snapshots with the ability to do rollbacks on Linux are a
feature that was often requested in the past. Snapper, together with the
Btrfs file system or thin-provisioned LVM volumes, now
fills that gap.
Btrfs, a new copy-on-write file system for Linux,
supports file system snapshots (a copy of the state of a subvolume at a
certain point in time) of subvolumes (one or more separately mountable file
systems within each physical partition). Snapshots are also supported on
thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets
you create and manage these snapshots. It comes with a command line and a
YaST interface. Starting with openSUSE Leap it is also possible
to boot from Btrfs snapshots—see Section 3.3, “System Rollback by Booting from Snapshots” for more information.
Using Snapper you can perform the following tasks:
Undo system changes made by zypper and YaST. See
Section 3.2, “Using Snapper to Undo Changes” for details.
Restore files from previous snapshots. See Section 3.2.2, “Using Snapper to Restore Files” for details.
Do a system rollback by booting from a snapshot. See Section 3.3, “System Rollback by Booting from Snapshots” for details.
Manually create snapshots on the fly and manage existing snapshots. See Section 3.5, “Manually Creating and Managing Snapshots” for details.
Snapper on openSUSE Leap is set up to serve as an “undo and recovery
tool” for system changes. By default, the root partition
(/) of openSUSE Leap is formatted with
Btrfs. Taking snapshots is automatically enabled if the
root partition (/) is big enough (approximately more
than 16 GB). Taking snapshots on partitions other than
/ is not enabled by default.
If you disabled Snapper during the installation, you can enable it at any time later. To do so, create a default Snapper configuration for the root file system by running
tux > sudo snapper -c root create-config /
Afterward enable the different snapshot types as described in Section 3.1.3.1, “Disabling/Enabling Snapshots”.
Keep in mind that snapshots require a Btrfs root file system with subvolumes set up as proposed by the installer and a partition size of at least 16 GB.
When a snapshot is created, both the snapshot and the original point to the
same blocks in the file system. So, initially a snapshot does not occupy
additional disk space. If data in the original file system is modified,
changed data blocks are copied while the old data blocks are kept for the
snapshot. Therefore, a snapshot occupies the same amount of space as the
data modified. So, over time, the amount of space a snapshot allocates
constantly grows. As a consequence, deleting files from a
Btrfs file system containing snapshots may
not free disk space!
Snapshots always reside on the same partition or subvolume on which the snapshot has been taken. It is not possible to store snapshots on a different partition or subvolume.
As a result, partitions containing snapshots need to be larger than “normal” partitions. The exact amount strongly depends on the number of snapshots you keep and the amount of data modifications. As a rule of thumb, consider using twice the size you normally would. To prevent disks from running out of space, old snapshots are automatically cleaned up. Refer to Section 3.1.3.4, “Controlling Snapshot Archiving” for details.
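The rule of thumb above amounts to a trivial calculation; the planned size below is an invented example figure:

```shell
#!/bin/sh
# Sizing sketch: a partition that keeps snapshots should be roughly
# twice the size you would otherwise plan. The figure is invented.
planned_gb=20
with_snapshots_gb=$((planned_gb * 2))
echo "plan ${with_snapshots_gb} GB instead of ${planned_gb} GB"
```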
Although snapshots themselves do not differ in a technical sense, we distinguish between three types of snapshots, based on the events that trigger them:
A single snapshot is created every hour. Old snapshots are automatically deleted. By default, the first snapshot of the last ten days, months, and years are kept. Timeline snapshots are disabled by default.
Whenever one or more packages are installed with YaST or Zypper, a
pair of snapshots is created: one before the installation starts
(“Pre”) and another one after the installation has finished
(“Post”). In case an important system component such as the
kernel has been installed, the snapshot pair is marked as important
(important=yes). Old snapshots are automatically
deleted. By default the last ten important snapshots and the last ten
“regular” (including administration snapshots) snapshots
are kept. Installation snapshots are enabled by default.
Whenever you administrate the system with YaST, a pair of snapshots is created: one when a YaST module is started (“Pre”) and another when the module is closed (“Post”). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten “regular” snapshots (including installation snapshots) are kept. Administration snapshots are enabled by default.
Some directories need to be excluded from snapshots for different reasons. The following list shows all directories that are excluded:
/boot/grub2/i386-pc,
/boot/grub2/x86_64-efi,
/boot/grub2/powerpc-ieee1275,
/boot/grub2/s390x-emu
A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on AMD64/Intel 64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.
/home
If /home does not reside on a separate partition, it
is excluded to avoid data loss on rollbacks.
/opt, /var/opt
Third-party products usually get installed to /opt. It
is excluded to avoid uninstalling these applications on rollbacks.
/srv
Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.
/tmp, /var/tmp,
/var/cache, /var/crash
All directories containing temporary files and caches are excluded from snapshots.
/usr/local
This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.
/var/lib/libvirt/images
The default location for virtual machine images managed with libvirt.
Excluded to ensure virtual machine images are not replaced with older
versions during a rollback. By default, this subvolume is created with the
option no copy on write.
/var/lib/mailman, /var/spool
Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.
/var/lib/named
Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.
/var/lib/mariadb,
/var/lib/mysql, /var/lib/pgsql
These directories contain database data. By default, these subvolumes are
created with the option no copy on write.
/var/log
Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.
openSUSE Leap comes with a reasonable default setup, which should be sufficient for most use cases. However, all aspects of taking automatic snapshots and snapshot keeping can be configured according to your needs.
Each of the three snapshot types (timeline, installation, administration) can be enabled or disabled independently.
Enabling:
snapper -c root set-config "TIMELINE_CREATE=yes"
Disabling:
snapper -c root set-config "TIMELINE_CREATE=no"
Timeline snapshots are enabled by default, except for the root partition.
Enabling:
Install the package
snapper-zypp-plugin
Disabling:
Uninstall the package
snapper-zypp-plugin
Installation snapshots are enabled by default.
Enabling:
Set USE_SNAPPER to yes in
/etc/sysconfig/yast2.
Disabling:
Set USE_SNAPPER to no in
/etc/sysconfig/yast2.
Administration snapshots are enabled by default.
Taking snapshot pairs upon installing packages with YaST or Zypper is
handled by the
snapper-zypp-plugin. An XML
configuration file, /etc/snapper/zypp-plugin.conf,
defines when to make snapshots. By default the file looks like the
following:
1 <?xml version="1.0" encoding="utf-8"?>
2 <snapper-zypp-plugin-conf>
3  <solvables>
4   <solvable match="w" important="true">kernel-*</solvable>
5   <solvable match="w" important="true">dracut</solvable>
6   <solvable match="w" important="true">glibc</solvable>
7   <solvable match="w" important="true">systemd*</solvable>
8   <solvable match="w" important="true">udev</solvable>
9   <solvable match="w">*</solvable>
10  </solvables>
11 </snapper-zypp-plugin-conf>
The match attribute defines whether the pattern is a Unix shell-style
wild card (w) or a Python regular expression (re).
If the given pattern matches and the corresponding package is marked as important (for example kernel packages), the snapshot will also be marked as important.
Pattern to match a package name. Based on the setting of the
match attribute, special characters are either interpreted as shell wild cards or regular expressions.
The last line unconditionally matches all packages.
With this configuration, snapshot pairs are made whenever a package is installed (line 9). When the kernel, dracut, glibc, systemd, or udev packages marked as important are installed, the snapshot pair will also be marked as important (lines 4 to 8). All rules are evaluated.
To disable a rule, either delete it or deactivate it using XML comments. To prevent the system from making snapshot pairs for every package installation for example, comment line 9:
1 <?xml version="1.0" encoding="utf-8"?>
2 <snapper-zypp-plugin-conf>
3  <solvables>
4   <solvable match="w" important="true">kernel-*</solvable>
5   <solvable match="w" important="true">dracut</solvable>
6   <solvable match="w" important="true">glibc</solvable>
7   <solvable match="w" important="true">systemd*</solvable>
8   <solvable match="w" important="true">udev</solvable>
9   <!-- <solvable match="w">*</solvable> -->
10  </solvables>
11 </snapper-zypp-plugin-conf>
Creating a new subvolume underneath the / hierarchy
and permanently mounting it is supported. Such a subvolume will be
excluded from snapshots. You need to make sure not to create it inside an
existing snapshot, since you would not be able to delete snapshots anymore
after a rollback.
openSUSE Leap is configured with the /@/ subvolume
which serves as an independent root for permanent subvolumes such as
/opt, /srv,
/home and others. Any new subvolumes you create and
permanently mount need to be created in this initial root file system.
To do so, run the following commands. In this example, a new subvolume
/usr/important is created from
/dev/sda2.
tux > sudo mount /dev/sda2 -o subvol=@ /mnt
tux > sudo btrfs subvolume create /mnt/usr/important
tux > sudo umount /mnt
The corresponding entry in /etc/fstab needs to look
like the following:
/dev/sda2 /usr/important btrfs subvol=@/usr/important 0 0
A subvolume may contain files that constantly change, such as
virtualized disk images, database files, or log files. If so, consider
disabling the copy-on-write feature for this volume, to avoid duplication
of disk blocks. Use the nodatacow mount option in
/etc/fstab to do so:
/dev/sda2 /usr/important btrfs nodatacow,subvol=@/usr/important 0 0
To alternatively disable copy-on-write for single files or directories,
use the command chattr +C
PATH.
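Since chattr +C only has an effect on Btrfs (and only takes reliable effect on new or empty files and directories), a cautious sketch first checks the file system type before applying it; the default target path below is just a placeholder for the demonstration:

```shell
#!/bin/sh
# Sketch: apply `chattr +C` only when the target resides on Btrfs.
# /tmp is merely a placeholder target for the demonstration.
target=${1:-/tmp}

# stat -f reports the file system type (GNU coreutils).
fstype=$(stat -f -c %T "$target")
if [ "$fstype" = "btrfs" ]; then
    # Note: +C only takes reliable effect on new/empty files or directories.
    chattr +C "$target" 2>/dev/null && echo "copy-on-write disabled for $target"
else
    echo "$target is on $fstype, skipping chattr +C"
fi
```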
Snapshots occupy disk space. To prevent disks from running out of space and thus causing system outages, old snapshots are automatically deleted. By default, up to ten important installation and administration snapshots and up to ten regular installation and administration snapshots are kept. If these snapshots occupy more than 50% of the root file system size, additional snapshots will be deleted. A minimum of four important and two regular snapshots are always kept.
Refer to Section 3.4.1, “Managing Existing Configurations” for instructions on how to change these values.
Apart from snapshots on Btrfs file systems, Snapper
also supports taking snapshots on thin-provisioned LVM volumes (snapshots
on regular LVM volumes are not supported) formatted
with XFS, Ext4 or Ext3. For more information and setup instructions on LVM
volumes, refer to Section 5.2, “LVM Configuration”.
To use Snapper on a thin-provisioned LVM volume you need to create a
Snapper configuration for it. On LVM it is required to specify the file
system with
--fstype=lvm(FILESYSTEM).
ext3, ext4 or xfs
are valid values for FILESYSTEM. Example:
tux > sudo snapper -c lvm create-config --fstype="lvm(xfs)" /thin_lvm
You can adjust this configuration according to your needs as described in Section 3.4.1, “Managing Existing Configurations”.
Snapper on openSUSE Leap is preconfigured to serve as a tool that lets you
undo changes made by zypper and YaST. For this purpose,
Snapper is configured to create a pair of snapshots before and after each
run of zypper and YaST. Snapper also lets you restore
system files that have been accidentally deleted or modified. Timeline
snapshots for the root partition need to be enabled for this
purpose—see
Section 3.1.3.1, “Disabling/Enabling Snapshots” for details.
By default, automatic snapshots as described above are configured for the
root partition and its subvolumes. To make snapshots available for other
partitions such as /home for example, you can create
custom configurations.
When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:
When undoing changes as described in the following, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly select the files that should be restored.
When doing rollbacks as described in Section 3.3, “System Rollback by Booting from Snapshots”, the system is reset to the state at which the snapshot was taken.
When undoing changes, it is also possible to compare a snapshot against the current system. Restoring all files from such a comparison has the same result as doing a rollback. However, the method described in Section 3.3, “System Rollback by Booting from Snapshots” should be preferred for rollbacks, since it is faster and allows you to review the system before doing the rollback.
There is no mechanism to ensure data consistency when creating a snapshot.
If a file (for example, a database) is written to at the same time the
snapshot is created, the snapshot will contain a corrupted or partly
written copy. Restoring such a file will cause problems. Furthermore, some system
files such as /etc/mtab must never be restored.
Therefore it is strongly recommended to always closely
review the list of changed files and their diffs. Only restore files that
really belong to the action you want to revert.
If you set up the root partition with Btrfs during the
installation, Snapper—preconfigured for doing rollbacks of YaST or
Zypper changes—will automatically be installed. Every time you start
a YaST module or a Zypper transaction, two snapshots are created: a
“pre-snapshot” capturing the state of the file system before
the start of the module and a “post-snapshot” after the module
has finished.
Using the YaST Snapper module or the snapper command
line tool, you can undo the changes made by YaST/Zypper by restoring
files from the “pre-snapshot”. By comparing two snapshots, the
tools also allow you to see which files have been changed. You can also
display the differences between two versions of a file (diff).
Start the Snapper module from the Miscellaneous
section in YaST or by entering
yast2 snapper.
Make sure Current Configuration is set to root. This is always the case unless you have manually added your own Snapper configurations.
Choose a pair of pre- and post-snapshots from the list. Both YaST and
Zypper snapshot pairs are of the type Pre & Post.
YaST snapshots are labeled as zypp(y2base) in the
Description column; Zypper snapshots are labeled
zypp(zypper).
Click Show Changes to open the list of files that differ between the two snapshots.
Review the list of files. To display a “diff” between the pre- and post-version of a file, select it from the list.
To restore one or more files, select the relevant files or directories by activating the respective check box. Click Restore Selected and confirm the action by clicking Yes.
To restore a single file, activate its diff view by clicking its name. Click Restore From First and confirm your choice with Yes.
snapper Command
Get a list of YaST and Zypper snapshots by running snapper
list -t pre-post. YaST snapshots are labeled
as yast MODULE_NAME in the
Description column; Zypper snapshots are labeled
zypp(zypper).
tux > sudo snapper list -t pre-post
Pre # | Post # | Pre Date                      | Post Date                     | Description
------+--------+-------------------------------+-------------------------------+--------------
311   | 312    | Tue 06 May 2014 14:05:46 CEST | Tue 06 May 2014 14:05:52 CEST | zypp(y2base)
340   | 341    | Wed 07 May 2014 16:15:10 CEST | Wed 07 May 2014 16:15:16 CEST | zypp(zypper)
342   | 343    | Wed 07 May 2014 16:20:38 CEST | Wed 07 May 2014 16:20:42 CEST | zypp(y2base)
344   | 345    | Wed 07 May 2014 16:21:23 CEST | Wed 07 May 2014 16:21:24 CEST | zypp(zypper)
346   | 347    | Wed 07 May 2014 16:41:06 CEST | Wed 07 May 2014 16:41:10 CEST | zypp(y2base)
348   | 349    | Wed 07 May 2014 16:44:50 CEST | Wed 07 May 2014 16:44:53 CEST | zypp(y2base)
350   | 351    | Wed 07 May 2014 16:46:27 CEST | Wed 07 May 2014 16:46:38 CEST | zypp(y2base)
Get a list of changed files for a snapshot pair with snapper
status
PRE..POST. Files
with content changes are marked with c, files that
have been added are marked with + and deleted files
are marked with -.
tux > sudo snapper status 350..351
+..... /usr/share/doc/packages/mikachan-fonts
+..... /usr/share/doc/packages/mikachan-fonts/COPYING
+..... /usr/share/doc/packages/mikachan-fonts/dl.html
c..... /usr/share/fonts/truetype/fonts.dir
c..... /usr/share/fonts/truetype/fonts.scale
+..... /usr/share/fonts/truetype/みかちゃん-p.ttf
+..... /usr/share/fonts/truetype/みかちゃん-pb.ttf
+..... /usr/share/fonts/truetype/みかちゃん-ps.ttf
+..... /usr/share/fonts/truetype/みかちゃん.ttf
c..... /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
c..... /var/lib/rpm/Basenames
c..... /var/lib/rpm/Dirnames
c..... /var/lib/rpm/Group
c..... /var/lib/rpm/Installtid
c..... /var/lib/rpm/Name
c..... /var/lib/rpm/Packages
c..... /var/lib/rpm/Providename
c..... /var/lib/rpm/Requirename
c..... /var/lib/rpm/Sha1header
c..... /var/lib/rpm/Sigmd5
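Status output in this format can be summarized with a few lines of shell; the sample lines below are an invented excerpt, and real input would come from snapper status PRE..POST:

```shell
#!/bin/sh
# Sketch: count snapper status entries by change type.
# The sample lines are invented; real data would come from
# `snapper status PRE..POST`.
sample='+..... /usr/share/doc/packages/mikachan-fonts
c..... /var/lib/rpm/Packages
c..... /usr/share/fonts/truetype/fonts.dir
-..... /tmp/obsolete-file'

# "+" = added, "c" = content changed, "-" = deleted.
summary=$(printf '%s\n' "$sample" | awk '
    /^\+/ { added++ }
    /^c/  { changed++ }
    /^-/  { deleted++ }
    END   { printf "added=%d changed=%d deleted=%d\n", added, changed, deleted }')
echo "$summary"
```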
To display the diff for a certain file, run snapper
diff
PRE..POST
FILENAME. If you do not specify
FILENAME, a diff for all files will be
displayed.
tux > sudo snapper diff 350..351 /usr/share/fonts/truetype/fonts.scale
--- /.snapshots/350/snapshot/usr/share/fonts/truetype/fonts.scale 2014-04-23 15:58:57.000000000 +0200
+++ /.snapshots/351/snapshot/usr/share/fonts/truetype/fonts.scale 2014-05-07 16:46:31.000000000 +0200
@@ -1,4 +1,4 @@
-1174
+1486
ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso10646-1
ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso8859-1
[...]
To restore one or more files run snapper -v undochange
PRE..POST
FILENAMES. If you do not specify any
file names, all changed files will be restored.
tux > sudo snapper -v undochange 350..351
create:0 modify:13 delete:7
undoing change...
deleting /usr/share/doc/packages/mikachan-fonts
deleting /usr/share/doc/packages/mikachan-fonts/COPYING
deleting /usr/share/doc/packages/mikachan-fonts/dl.html
deleting /usr/share/fonts/truetype/みかちゃん-p.ttf
deleting /usr/share/fonts/truetype/みかちゃん-pb.ttf
deleting /usr/share/fonts/truetype/みかちゃん-ps.ttf
deleting /usr/share/fonts/truetype/みかちゃん.ttf
modifying /usr/share/fonts/truetype/fonts.dir
modifying /usr/share/fonts/truetype/fonts.scale
modifying /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
modifying /var/lib/rpm/Basenames
modifying /var/lib/rpm/Dirnames
modifying /var/lib/rpm/Group
modifying /var/lib/rpm/Installtid
modifying /var/lib/rpm/Name
modifying /var/lib/rpm/Packages
modifying /var/lib/rpm/Providename
modifying /var/lib/rpm/Requirename
modifying /var/lib/rpm/Sha1header
modifying /var/lib/rpm/Sigmd5
undoing change done
Reverting user additions via undoing changes with Snapper is not recommended. Since certain directories are excluded from snapshots, files belonging to these users will remain in the file system. If a user with the same user ID as a deleted user is created, this user will inherit the files. Therefore it is strongly recommended to use the YaST tool to remove users.
Apart from the installation and administration snapshots, Snapper creates timeline snapshots. You can use these backup snapshots to restore files that have accidentally been deleted or to restore a previous version of a file. By using Snapper's diff feature you can also find out which modifications have been made at a certain point of time.
Being able to restore files is especially interesting for data, which may
reside on subvolumes or partitions for which snapshots are not taken by
default. To be able to restore files from home directories, for example,
create a separate Snapper configuration for /home
doing automatic timeline snapshots. See
Section 3.4, “Creating and Modifying Snapper Configurations” for instructions.
Snapshots taken from the root file system (defined by Snapper's root configuration), can be used to do a system rollback. The recommended way to do such a rollback is to boot from the snapshot and then perform the rollback. See Section 3.3, “System Rollback by Booting from Snapshots” for details.
Performing a rollback would also be possible by restoring all files from a
root file system snapshot as described below. However, this is not
recommended. You may restore single files, for example a configuration
file from the /etc directory, but not the
complete list of files from the snapshot.
This restriction only affects snapshots taken from the root file system!
Start the Snapper module from the Miscellaneous
section in YaST or by entering
yast2 snapper.
Choose the Current Configuration from which to choose a snapshot.
Select a timeline snapshot from which to restore a file and choose Show Changes. Timeline snapshots are of the type Single with a description value of timeline.
Select a file from the text box by clicking the file name. The difference between the snapshot version and the current system is shown. Activate the check box to select the file for restore. Do so for all files you want to restore.
Click Restore Selected and confirm the action by clicking Yes.
snapper Command
Get a list of timeline snapshots for a specific configuration by running the following command:
tux > sudo snapper -c CONFIG list -t single | grep timeline
CONFIG needs to be replaced by an existing
Snapper configuration. Use snapper list-configs to
display a list.
Get a list of changed files for a given snapshot by running the following command:
tux > sudo snapper -c CONFIG status SNAPSHOT_ID..0
Replace SNAPSHOT_ID by the ID for the snapshot from which you want to restore the file(s).
Optionally list the differences between the current file version and the one from the snapshot by running
tux > sudo snapper -c CONFIG diff SNAPSHOT_ID..0 FILE_NAME
If you do not specify FILE_NAME, the differences for all files are shown.
To restore one or more files, run
tux > sudo snapper -c CONFIG -v undochange SNAPSHOT_ID..0 FILENAME1 FILENAME2
If you do not specify file names, all changed files will be restored.
The GRUB 2 version included in openSUSE Leap can boot from Btrfs snapshots.
Together with Snapper's rollback feature, this allows you to recover a
misconfigured system. Only snapshots created for the default Snapper
configuration (root) are bootable.
As of openSUSE Leap 42.3 system rollbacks are only supported if the default subvolume configuration of the root partition has not been changed.
When booting a snapshot, the parts of the file system included in the snapshot are mounted read-only; all other file systems and parts that are excluded from snapshots are mounted read-write and can be modified.
When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:
When undoing changes as described in Section 3.2, “Using Snapper to Undo Changes”, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly exclude selected files from being restored.
When doing rollbacks as described in the following, the system is reset to the state at which the snapshot was taken.
To do a rollback from a bootable snapshot, the following requirements must be met. When doing a default installation, the system is set up accordingly.
The root file system needs to be Btrfs. Booting from LVM volume snapshots is not supported.
The root file system needs to be on a single device, a single partition
and a single subvolume. Directories that are excluded from snapshots such
as /srv (see Section 3.1.2, “Directories That Are Excluded from Snapshots”
for a full list) may reside on separate partitions.
The system needs to be bootable via the installed boot loader.
To perform a rollback from a bootable snapshot, do as follows:
Boot the system. In the boot menu choose Bootable snapshots and select the snapshot you want to boot. The snapshots are sorted by date, with the most recent snapshot listed first.
Log in to the system. Carefully check whether everything works as expected. Note that you cannot write to any directory that is part of the snapshot. Data you write to other directories will not get lost, regardless of what you do next.
Depending on whether you want to perform the rollback or not, choose your next step:
If the system is in a state where you do not want to do a rollback, reboot to boot into the current system state. You can then choose a different snapshot, or start the rescue system.
To perform the rollback, run
tux > sudo snapper rollback
and reboot afterward. On the boot screen, choose the default boot entry to reboot into the reinstated system. A snapshot of the file system status before the rollback is created. The default subvolume for root will be replaced with a fresh read-write snapshot. For details, see Section 3.3.1, “Snapshots after Rollback”.
It is useful to add a description for the snapshot with the -d option.
For example:
New file system root since rollback on DATE TIME
If snapshots are not disabled during installation, an initial bootable
snapshot is created at the end of the initial system installation. You can
go back to that state at any time by booting this snapshot. The snapshot
can be identified by the description after installation.
A bootable snapshot is also created when starting a system upgrade to a service pack or a new major release (provided snapshots are not disabled).
Before a rollback is performed, a snapshot of the running file system is created. The description references the ID of the snapshot that was restored in the rollback.
Snapshots created by rollbacks receive the value number
for the Cleanup attribute. The rollback snapshots are
therefore automatically deleted when the set number of snapshots is reached.
Refer to Section 3.6, “Automatic Snapshot Clean-Up” for details.
If the snapshot contains important data, extract the data from the snapshot
before it is removed.
For example, after a fresh installation the following snapshots are available on the system:
root # snapper --iso list
Type   | # | ... | Cleanup | Description           | Userdata
-------+---+-----+---------+-----------------------+--------------
single | 0 | ... |         | current               |
single | 1 | ... |         | first root filesystem |
single | 2 | ... | number  | after installation    | important=yes
After running sudo snapper rollback, snapshot
3 is created, containing the state of the system
before the rollback was executed. Snapshot 4 is
the new default Btrfs subvolume and thus the system after a reboot.
root # snapper --iso list
Type   | # | ... | Cleanup | Description           | Userdata
-------+---+-----+---------+-----------------------+--------------
single | 0 | ... |         | current               |
single | 1 | ... | number  | first root filesystem |
single | 2 | ... | number  | after installation    | important=yes
single | 3 | ... | number  | rollback backup of #1 | important=yes
single | 4 | ... |         |                       |
To boot from a snapshot, reboot your machine and choose the boot menu entry for booting from snapshots. A screen listing all bootable snapshots opens. The most recent snapshot is listed first, the oldest last. Use the keys ↓ and ↑ to navigate and press Enter to activate the selected snapshot. Activating a snapshot from the boot menu does not reboot the machine immediately, but rather opens the boot loader of the selected snapshot.
Each snapshot entry in the boot loader follows a naming scheme which makes it possible to identify it easily:
[*] OS (KERNEL, DATE T TIME, DESCRIPTION)

[*]: present if the snapshot was marked important.
OS: operating system label.
KERNEL: kernel used by the snapshot.
DATE: date in the format YYYY-MM-DD.
TIME: time in the format HH:MM.
DESCRIPTION: a description of the snapshot. In case of a manually created snapshot, this is the string created with the option --description.
It is possible to replace the default string in the description field of a snapshot with a custom string. This is for example useful if an automatically created description is not sufficient, or a user-provided description is too long. To set a custom string STRING for snapshot NUMBER, use the following command:
tux > sudo snapper modify --userdata "bootloader=STRING" NUMBER
The description should be no longer than 25 characters—everything that exceeds this size will not be readable on the boot screen.
A complete system rollback, restoring the complete system to the identical state as it was in when a snapshot was taken, is not possible.
Root file system snapshots do not contain all directories. See Section 3.1.2, “Directories That Are Excluded from Snapshots” for details and reasons. As a general consequence, data from these directories is not restored, resulting in the following limitations.
Applications and add-ons installing data in subvolumes excluded from
the snapshot, such as /opt, may not work after a
rollback if other parts of the application data are also installed on
subvolumes included in the snapshot. Re-install the application or the
add-on to solve this problem.
If an application had changed file permissions and/or ownership in between snapshot and current system, the application may not be able to access these files. Reset permissions and/or ownership for the affected files after the rollback.
If a service or an application has established a new data format in between snapshot and current system, the application may not be able to read the affected data files after a rollback.
Subvolumes like /srv may contain a mixture of code
and data. A rollback may result in non-functional code. A downgrade of
the PHP version, for example, may result in broken PHP scripts for the
Web server.
If a rollback removes users from the system, data that is owned by
these users in directories excluded from the snapshot, is not removed.
If a user with the same user ID is created, this user will inherit the
files. Use a tool like find to locate and remove
orphaned files.
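A sketch of such a clean-up, assuming the orphaned files live under /srv (the path is only an example; always review the list before deleting anything):

```shell
# List files under /srv whose owner or group no longer exists
tux > sudo find /srv -xdev \( -nouser -o -nogroup \) -ls
# After reviewing the list, remove files without a valid owner
tux > sudo find /srv -xdev -nouser -delete
```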
A rollback of the boot loader is not possible, since all
“stages” of the boot loader must fit together. This cannot be
guaranteed when doing rollbacks of /boot.
The way Snapper behaves is defined in a configuration file that is specific
to each partition or Btrfs subvolume. These
configuration files reside under /etc/snapper/configs/.
In case the root file system is big enough (approximately 12 GB), snapshots
are automatically enabled for the root file system /
upon installation. The corresponding default configuration is named
root. It creates and manages the YaST and Zypper
snapshots. See Section 3.4.1.1, “Configuration Data” for a list
of the default values.
As explained in Section 3.1, “Default Setup”, enabling snapshots requires additional free space in the root file system. How much depends on the number of packages installed and the amount of changes made to the volume that is included in snapshots. The snapshot frequency and the number of snapshots that get archived also matter.
There is a minimum root file system size required to
automatically enable snapshots during the installation. As of openSUSE Leap
12 SP3 this size is approximately 12 GB. This value may change in the
future, depending on architecture and the size of the base system. It
depends on the values of the following tags in the file
/control.xml from the installation media:
<root_base_size>
<btrfs_increase_percentage>
It is calculated with the following formula: ROOT_BASE_SIZE * (1 + BTRFS_INCREASE_PERCENTAGE/100)
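As a worked illustration of the formula, assume ROOT_BASE_SIZE is 3 GB (3072 MB) and BTRFS_INCREASE_PERCENTAGE is 300; both are hypothetical values, not necessarily those shipped in /control.xml:

```shell
# Hypothetical values for illustration only; the real ones come from /control.xml
ROOT_BASE_SIZE=3072              # MB (3 GB)
BTRFS_INCREASE_PERCENTAGE=300
# ROOT_BASE_SIZE * (1 + BTRFS_INCREASE_PERCENTAGE/100), in integer arithmetic:
echo $(( ROOT_BASE_SIZE * (100 + BTRFS_INCREASE_PERCENTAGE) / 100 ))  # prints 12288 (= 12 GB)
```

With these example values, the formula yields the approximately 12 GB minimum mentioned above.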
Keep in mind that this value is a minimum size. Consider using more space for the root file system. As a rule of thumb, double the size you would use when not having enabled snapshots.
You may create your own configurations for other partitions formatted with
Btrfs or existing subvolumes on a
Btrfs partition. In the following example we will set up
a Snapper configuration for backing up the Web server data residing on a
separate, Btrfs-formatted partition mounted at
/srv/www.
After a configuration has been created, you can either use
snapper itself or the YaST
module to restore files from these snapshots. In YaST you need to select
the respective configuration, while for snapper you need to specify
it with the global switch
-c (for example, snapper -c myconfig
list).
To create a new Snapper configuration, run snapper
create-config:
tux > sudo snapper -c www-data create-config /srv/www

www-data: name of the configuration file.
/srv/www: mount point of the partition or Btrfs subvolume to take snapshots of.
This command will create a new configuration file
/etc/snapper/configs/www-data with reasonable default
values (taken from
/etc/snapper/config-templates/default). Refer to
Section 3.4.1, “Managing Existing Configurations” for instructions on how to
adjust these defaults.
Default values for a new configuration are taken from
/etc/snapper/config-templates/default. To use your own
set of defaults, create a copy of this file in the same directory and
adjust it to your needs. To use it, specify the -t option
with the create-config command:
tux > sudo snapper -c www-data create-config -t MY_DEFAULTS /srv/www
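The custom template can be prepared by copying the shipped default (the template name MY_DEFAULTS is an example):

```shell
# Copy the shipped defaults to a custom template, then adjust it to your needs
tux > sudo cp /etc/snapper/config-templates/default /etc/snapper/config-templates/MY_DEFAULTS
```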
The snapper command offers several subcommands for managing
existing configurations. You can list, show, delete, and modify them:
Use the command snapper list-configs to get all
existing configurations:
tux > sudo snapper list-configs
Config | Subvolume
-------+----------
root   | /
usr    | /usr
local  | /local
Use the subcommand snapper -c
CONFIG get-config to display the
specified configuration. CONFIG needs to be
replaced by a configuration name shown by snapper
list-configs. See
Section 3.4.1.1, “Configuration Data” for more information
on the configuration options.
To display the default configuration run
tux > sudo snapper -c root get-config
Use the subcommand snapper -c CONFIG
set-config
OPTION=VALUE
to modify an option in the specified configuration.
CONFIG needs to be replaced by a
configuration name shown by snapper list-configs.
Possible values for OPTION and
VALUE are listed in Section 3.4.1.1, “Configuration Data”.
Use the subcommand snapper -c
CONFIG delete-config to delete a
configuration. CONFIG needs to be replaced by
a configuration name shown by snapper list-configs.
Each configuration contains a list of options that can be modified from
the command line. The following list provides details for each option. To
change a value, run snapper -c CONFIG
set-config
"KEY=VALUE".
ALLOW_GROUPS,
ALLOW_USERS
Granting permissions to use snapshots to regular users. See Section 3.4.1.2, “Using Snapper as Regular User” for more information.
The default value is "".
BACKGROUND_COMPARISON
Defines whether pre and post snapshots should be compared in the background after creation.
The default value is "yes".
EMPTY_*
Defines the clean-up algorithm for snapshot pairs with identical pre and post snapshots. See Section 3.6.3, “Cleaning Up Snapshot Pairs That Do Not Differ” for details.
FSTYPE
File system type of the partition. Do not change.
The default value is "btrfs".
NUMBER_CLEANUP / NUMBER_LIMIT_*
Defines the clean-up algorithm for installation and admin snapshots. See Section 3.6.1, “Cleaning Up Numbered Snapshots” for details.
QGROUP / SPACE_LIMIT
Adds quota support to the clean-up algorithms. See Section 3.6.5, “Adding Disk Quota Support” for details.
SUBVOLUME
Mount point of the partition or subvolume to snapshot. Do not change.
The default value is "/".
SYNC_ACL
If Snapper is used by regular users (see
Section 3.4.1.2, “Using Snapper as Regular User”), the users must be able to
access the .snapshot directories and to read files
within them. If SYNC_ACL is set to yes, Snapper
automatically makes them accessible using ACLs for users and groups
from the ALLOW_USERS or ALLOW_GROUPS entries.
The default value is "no".
TIMELINE_CREATE
If set to yes, hourly snapshots are created. Valid
values: yes, no.
The default value is "no".
TIMELINE_CLEANUP /
TIMELINE_LIMIT_*
Defines the clean-up algorithm for timeline snapshots. See Section 3.6.2, “Cleaning Up Timeline Snapshots” for details.
By default Snapper can only be used by root. However, there are
cases in which certain groups or users need to be able to create snapshots
or undo changes by reverting to a snapshot:
Web site administrators who want to take snapshots of
/srv/www
Users who want to take a snapshot of their home directory
For these purposes, Snapper configurations that grant permissions to users
and/or groups can be created. The corresponding
.snapshots directory needs to be readable and
accessible by the specified users. The easiest way to achieve this is to
set the SYNC_ACL option to yes.
Note that all steps in this procedure need to be run by root.
If not existing, create a Snapper configuration for the partition or subvolume on which the user should be able to use Snapper. Refer to Section 3.4, “Creating and Modifying Snapper Configurations” for instructions. Example:
tux > sudo snapper --config web_data create-config /srv/www
The configuration file is created under
/etc/snapper/configs/CONFIG,
where CONFIG is the value you specified with
-c/--config in the previous step (for example
/etc/snapper/configs/web_data). Adjust it according
to your needs; see Section 3.4.1, “Managing Existing Configurations” for
details.
Set values for ALLOW_USERS and/or
ALLOW_GROUPS to grant permissions to users and/or groups,
respectively. Multiple entries need to be separated by
Space. To grant permissions to the user
www_admin for example, run:
tux > sudo snapper -c web_data set-config "ALLOW_USERS=www_admin" SYNC_ACL="yes"
The given Snapper configuration can now be used by the specified user(s)
and/or group(s). You can test it with the list
command, for example:
www_admin:~ > snapper -c web_data list
Snapper is not restricted to creating and managing snapshots automatically by configuration; you can also create snapshot pairs (“before and after”) or single snapshots manually using either the command-line tool or the YaST module.
All Snapper operations are carried out for an existing configuration (see
Section 3.4, “Creating and Modifying Snapper Configurations” for details). You can only take
snapshots of partitions or volumes for which a configuration exists. By
default the system configuration (root) is used. If you
want to create or manage snapshots for your own configuration, you need to
explicitly choose it. Use the configuration
drop-down box in YaST or specify the -c option on the command
line (snapper -c MYCONFIG
COMMAND).
Each snapshot consists of the snapshot itself and some metadata. When
creating a snapshot you also need to specify the metadata. Modifying a
snapshot means changing its metadata—you cannot modify its content.
Use snapper list to show existing snapshots and their
metadata:
snapper --config home list
Lists snapshots for the configuration home. To list
snapshots for the default configuration (root), use snapper -c
root list or snapper list.
snapper list -a
Lists snapshots for all existing configurations.
snapper list -t pre-post
Lists all pre and post snapshot pairs for the default
(root) configuration.
snapper list -t single
Lists all snapshots of the type single for the
default (root) configuration.
The following metadata is available for each snapshot:
Type: Snapshot type, see Section 3.5.1.1, “Snapshot Types” for details. This data cannot be changed.
Number: Unique number of the snapshot. This data cannot be changed.
Pre Number: Specifies the number of the corresponding pre snapshot. For snapshots of type post only. This data cannot be changed.
Description: A description of the snapshot.
Userdata: An extended description where
you can specify custom data in the form of a comma-separated key=value
list: reason=testing, project=foo. This field is also
used to mark a snapshot as important (important=yes)
and to list the user that created the snapshot
(user=tux).
Cleanup-Algorithm: Cleanup-algorithm for the snapshot, see Section 3.6, “Automatic Snapshot Clean-Up” for details.
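As a sketch, the modifiable metadata of an existing snapshot can be changed with snapper modify; snapshot number 17 and the values shown are only examples:

```shell
# Mark snapshot 17 as important and record its creator in the user data
tux > sudo snapper modify --userdata "important=yes,user=tux" 17
# Change its description
tux > sudo snapper modify --description "before Apache reconfiguration" 17
```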
Snapper knows three different types of snapshots: pre, post, and single. Physically they do not differ, but Snapper handles them differently.
pre
Snapshot of a file system before a modification.
Each pre snapshot has a corresponding
post snapshot. Used for the automatic YaST/Zypper
snapshots, for example.
post
Snapshot of a file system after a modification.
Each post snapshot has a corresponding
pre snapshot. Used for the automatic YaST/Zypper
snapshots, for example.
single
Stand-alone snapshot. Used for the automatic hourly snapshots, for example. This is the default type when creating snapshots.
Snapper provides three algorithms to clean up old snapshots. The algorithms are executed in a daily cron job. It is possible to define the number of different types of snapshots to keep in the Snapper configuration (see Section 3.4.1, “Managing Existing Configurations” for details).
Deletes old snapshots when a certain snapshot count is reached.
Deletes old snapshots having passed a certain age, but keeps several hourly, daily, monthly, and yearly snapshots.
Deletes pre/post snapshot pairs with empty diffs.
Creating a snapshot is done by running snapper create or
via the YaST Snapper module. The following examples explain how to create
snapshots from the command line. It should be easy to adapt them when using
the YaST interface.
You should always specify a meaningful description to be able to identify the snapshot's purpose later. More information can be specified via the user data option.
snapper create --description "Snapshot for week 2
2014"
Creates a stand-alone snapshot (type single) for the default
(root) configuration with a description. Because no
cleanup-algorithm is specified, the snapshot will never be deleted
automatically.
snapper --config home create --description "Cleanup in
~tux"
Creates a stand-alone snapshot (type single) for a custom configuration
named home with a description. Because no
cleanup-algorithm is specified, the snapshot will never be deleted
automatically.
snapper --config home create --description "Daily data
backup" --cleanup-algorithm timeline
Creates a stand-alone snapshot (type single) for a custom configuration
named home with a description. The snapshot will
automatically be deleted when it meets the criteria specified for the
timeline cleanup-algorithm in the configuration.
snapper create --type pre --print-number --description
"Before the Apache config cleanup" --userdata "important=yes"
Creates a snapshot of the type pre and prints the
snapshot number. First command needed to create a pair of snapshots used
to save a “before” and “after” state. The
snapshot is marked as important.
snapper create --type post --pre-number 30 --description
"After the Apache config cleanup" --userdata "important=yes"
Creates a snapshot of the type post paired with the
pre snapshot number 30. Second
command needed to create a pair of snapshots used to save a
“before” and “after” state. The snapshot is
marked as important.
snapper create --command COMMAND
--description "Before and after COMMAND"
Automatically creates a snapshot pair before and after running COMMAND. This option is only available when using snapper on the command line.
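For example, a package update could be wrapped in a pre/post pair like this (the wrapped command is only an illustration):

```shell
# Creates a pre snapshot, runs the command, then creates the matching post snapshot
tux > sudo snapper create --command "zypper -n patch" \
    --description "Before and after zypper patch"
```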
Snapper allows you to modify the description, the cleanup algorithm, and the user data of a snapshot. All other metadata cannot be changed. The following examples explain how to modify snapshots from the command line. It should be easy to adapt them when using the YaST interface.
To modify a snapshot on the command line, you need to know its number. Use
snapper list to display all snapshots
and their numbers.
The YaST Snapper module already lists all snapshots. Choose one from the list to modify it.
snapper modify --cleanup-algorithm "timeline"
10
Modifies the metadata of snapshot 10 for the default
(root) configuration. The cleanup algorithm is set to
timeline.
snapper --config home modify --description "daily backup"
--cleanup-algorithm "timeline" 120
Modifies the metadata of snapshot 120 for a custom configuration named
home. A new description is set and the cleanup
algorithm is set to timeline.
To delete a snapshot with the YaST module, select the snapshot in the list and delete it.
To delete a snapshot with the command line tool, you need to know its
number. Get it by running snapper list. To delete a
snapshot, run snapper delete
NUMBER.
Deleting the current default subvolume snapshot is not allowed.
When deleting snapshots with Snapper, the freed space is claimed by a
Btrfs process running in the background, so it becomes visible and
available with a delay. If you need the space freed by
deleting a snapshot to be available immediately, use the option
--sync with the delete command.
When deleting a pre snapshot, you should always delete
its corresponding post snapshot (and vice versa).
snapper delete 65
Deletes snapshot 65 for the default (root)
configuration.
snapper -c home delete 89 90
Deletes snapshots 89 and 90 for a custom configuration named
home.
snapper delete --sync 23
Deletes snapshot 23 for the default (root)
configuration and makes the freed space available immediately.
Sometimes the Btrfs snapshot is present but the XML file containing the metadata for Snapper is missing. In this case the snapshot is not visible for Snapper and needs to be deleted manually:
btrfs subvolume delete /.snapshots/SNAPSHOTNUMBER/snapshot
rm -rf /.snapshots/SNAPSHOTNUMBER
If you delete snapshots to free space on your hard disk, make sure to delete old snapshots first. The older a snapshot is, the more disk space it occupies.
Snapshots are also automatically deleted by a daily cron job. Refer to Section 3.5.1.2, “Cleanup-algorithms” for details.
Snapshots occupy disk space and over time the amount of disk space occupied by the snapshots may become large. To prevent disks from running out of space, Snapper offers algorithms to automatically delete old snapshots. These algorithms differentiate between timeline snapshots and numbered snapshots (administration plus installation snapshot pairs). You can specify the number of snapshots to keep for each type.
In addition to that, you can optionally specify a disk space quota, defining the maximum amount of disk space the snapshots may occupy. It is also possible to automatically delete pre and post snapshots pairs that do not differ.
A clean-up algorithm is always bound to a single Snapper configuration, so you need to configure algorithms for each configuration. To prevent certain snapshots from being automatically deleted, refer to How to make a snapshot permanent?
The default setup (root) is configured to do clean-up
for numbered snapshots and empty pre and post snapshot pairs. Quota support
is enabled—snapshots may not occupy more than 50% of the available
disk space of the root partition. Timeline snapshots are disabled by
default, therefore the timeline clean-up algorithm is also disabled.
Cleaning up numbered snapshots—administration plus installation snapshot pairs—is controlled by the following parameters of a Snapper configuration.
NUMBER_CLEANUP
Enables or disables clean-up of installation and admin snapshot pairs.
If enabled, snapshot pairs are deleted when the total snapshot count
exceeds a number specified with NUMBER_LIMIT and/or
NUMBER_LIMIT_IMPORTANT and an
age specified with NUMBER_MIN_AGE. Valid values:
yes (enable), no (disable).
The default value is "yes".
Example command to change or set:
tux > sudo snapper -c CONFIG set-config "NUMBER_CLEANUP=no"
NUMBER_LIMIT /
NUMBER_LIMIT_IMPORTANT
Defines how many regular and/or important installation and
administration snapshot pairs to keep. Only the youngest snapshots will
be kept. Ignored if NUMBER_CLEANUP is set to
"no".
The default value is "2-10" for
NUMBER_LIMIT and "4-10" for
NUMBER_LIMIT_IMPORTANT.
Example command to change or set:
tux > sudo snapper -c CONFIG set-config "NUMBER_LIMIT=10"
In case quota support is enabled (see
Section 3.6.5, “Adding Disk Quota Support”) the limit needs
to be specified as a minimum-maximum range, for example
2-10. If quota support is disabled, a constant
value, for example 10, needs to be provided,
otherwise cleaning-up will fail with an error.
NUMBER_MIN_AGE
Defines the minimum age in seconds a snapshot must have before it can automatically be deleted. Snapshots younger than the value specified here will not be deleted, regardless of how many exist.
The default value is "1800".
Example command to change or set:
tux > sudo snapper -c CONFIG set-config "NUMBER_MIN_AGE=864000"
NUMBER_LIMIT, NUMBER_LIMIT_IMPORTANT
and NUMBER_MIN_AGE are always evaluated. Snapshots are
only deleted when all conditions are met.
If you always want to keep the number of snapshots defined with
NUMBER_LIMIT* regardless of their age, set
NUMBER_MIN_AGE to 0.
The following example shows a configuration to keep the last 10 important and regular snapshots regardless of age:
NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=10
NUMBER_LIMIT=10
NUMBER_MIN_AGE=0
On the other hand, if you do not want to keep snapshots beyond a certain
age, set NUMBER_LIMIT* to 0 and
provide the age with NUMBER_MIN_AGE.
The following example shows a configuration to only keep snapshots younger than ten days:
NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=0
NUMBER_LIMIT=0
NUMBER_MIN_AGE=864000
Cleaning up timeline snapshots is controlled by the following parameters of a Snapper configuration.
TIMELINE_CLEANUP
Enables or disables clean-up of timeline snapshots. If enabled,
snapshots are deleted when the total snapshot count exceeds a number
specified with TIMELINE_LIMIT_*
and an age specified with
TIMELINE_MIN_AGE. Valid values:
yes, no.
The default value is "yes".
Example command to change or set:
tux > sudo snapper -c CONFIG set-config "TIMELINE_CLEANUP=yes"
TIMELINE_LIMIT_DAILY,
TIMELINE_LIMIT_HOURLY,
TIMELINE_LIMIT_MONTHLY,
TIMELINE_LIMIT_WEEKLY,
TIMELINE_LIMIT_YEARLY
Number of hourly, daily, monthly, weekly, and yearly snapshots to keep.
The default value for each entry is "10", except for
TIMELINE_LIMIT_WEEKLY, which is set to
"0" by default.
TIMELINE_MIN_AGE
Defines the minimum age in seconds a snapshot must have before it can automatically be deleted.
The default value is "1800".
TIMELINE_CLEANUP="yes"
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_YEARLY="2"
TIMELINE_MIN_AGE="1800"
This example configuration enables hourly snapshots which are
automatically cleaned up. TIMELINE_MIN_AGE and
TIMELINE_LIMIT_* are always both evaluated. In this
example, the minimum age of a snapshot before it can be deleted is set to
30 minutes (1800 seconds). Since we create hourly snapshots, this ensures
that only the latest snapshots are kept. If
TIMELINE_LIMIT_DAILY is set to a non-zero value, the
first snapshot of the day is kept, too.
Hourly: The last 24 snapshots that have been made.
Daily: The first daily snapshot that has been made is kept from the last seven days.
Monthly: The first snapshot made on the last day of the month is kept for the last twelve months.
Weekly: The first snapshot made on the last day of the week is kept from the last four weeks.
Yearly: The first snapshot made on the last day of the year is kept for the last two years.
As explained in Section 3.1.1, “Types of Snapshots”, whenever you run a YaST module or execute Zypper, a pre snapshot is created on start-up and a post snapshot is created when exiting. In case you have not made any changes, there will be no difference between the pre and post snapshots. Such “empty” snapshot pairs can automatically be deleted by setting the following parameters in a Snapper configuration:
EMPTY_PRE_POST_CLEANUP
If set to yes, pre and post snapshot pairs that do
not differ will be deleted.
The default value is "yes".
EMPTY_PRE_POST_MIN_AGE
Defines the minimum age in seconds a pre and post snapshot pair that does not differ must have before it can automatically be deleted.
The default value is "1800".
Snapper does not offer custom clean-up algorithms for manually created snapshots. However, you can assign the number or timeline clean-up algorithm to a manually created snapshot. If you do so, the snapshot will join the “clean-up queue” for the algorithm you specified. You can specify a clean-up algorithm when creating a snapshot, or by modifying an existing snapshot:
snapper create --description "Test" --cleanup-algorithm number
Creates a stand-alone snapshot (type single) for the default (root)
configuration and assigns the number clean-up
algorithm.
snapper modify --cleanup-algorithm "timeline" 25
Modifies the snapshot with the number 25 and assigns the clean-up
algorithm timeline.
In addition to the number and/or timeline clean-up algorithms described above, Snapper supports quotas. You can define what percentage of the available space snapshots are allowed to occupy. This percentage value always applies to the Btrfs subvolume defined in the respective Snapper configuration.
If Snapper was enabled during the installation, quota support is
automatically enabled. In case you manually enable Snapper at a later point
in time, you can enable quota support by running snapper
setup-quota. This requires a valid configuration (see
Section 3.4, “Creating and Modifying Snapper Configurations” for more information).
Quota support is controlled by the following parameters of a Snapper configuration.
QGROUP
The Btrfs quota group used by Snapper. If not set, run
snapper setup-quota to set it. Do not change the value
afterward unless you are familiar with
man 8 btrfs-qgroup.
SPACE_LIMIT
Limit of space snapshots are allowed to use in fractions of 1 (100%). Valid values range from 0 to 1 (0.1 = 10%, 0.2 = 20%, ...).
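Example command to change or set (the 40% limit is an example value):

```shell
tux > sudo snapper -c root set-config "SPACE_LIMIT=0.4"
```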
The following limitations and guidelines apply:
Quotas are only activated in addition to an existing number and/or timeline clean-up algorithm. If no clean-up algorithm is active, quota restrictions are not applied.
With quota support enabled, Snapper will perform two clean-up runs if required. The first run will apply the rules specified for number and timeline snapshots. Only if the quota is exceeded after this run, the quota-specific rules will be applied in a second run.
Even if quota support is enabled, Snapper will always keep the number of
snapshots specified with the NUMBER_LIMIT* and
TIMELINE_LIMIT* values, even if the quota will be
exceeded. It is therefore recommended to specify ranged values
(MIN-MAX)
for NUMBER_LIMIT* and
TIMELINE_LIMIT* to ensure the quota can be applied.
If, for example, NUMBER_LIMIT=5-20 is set, Snapper
will perform a first clean-up run and reduce the number of regular
numbered snapshots to 20. In case these 20 snapshots exceed the
quota, Snapper will delete the oldest ones in a second run until the
quota is met. A minimum of five snapshots will always be kept, regardless
of the amount of space they occupy.
Why Does Snapper Never Show Changes in /var/log,
/tmp and Other Directories?
For some directories we decided to exclude them from snapshots. See Section 3.1.2, “Directories That Are Excluded from Snapshots” for a list and reasons. To exclude a path from snapshots, we create a subvolume for that path.
Displaying the amount of disk space a snapshot allocates is currently not
supported by the Btrfs tools. However, if you have
quota enabled, it is possible to determine how much space would be freed
if all snapshots were deleted:
Get the quota group ID (1/0 in the following
example):
tux > sudo snapper -c root get-config | grep QGROUP
QGROUP | 1/0
Rescan the subvolume quotas:
tux > sudo btrfs quota rescan -w /
Show the data of the quota group (1/0 in the
following example):
tux > sudo btrfs qgroup show / | grep "1/0"
1/0   4.80GiB   108.82MiB
The third column shows the amount of space that would be freed when
deleting all snapshots (108.82MiB).
To free space on a Btrfs partition containing
snapshots you need to delete unneeded snapshots rather than files. Older
snapshots occupy more space than recent ones. See
Section 3.1.3.4, “Controlling Snapshot Archiving” for details.
Doing an upgrade from one service pack to another results in snapshots occupying a lot of disk space on the system subvolumes, because a lot of data gets changed (package updates). Manually deleting these snapshots after they are no longer needed is recommended. See Section 3.5.4, “Deleting Snapshots” for details.
Yes—refer to Section 3.3, “System Rollback by Booting from Snapshots” for details.
Currently Snapper does not offer means to prevent a snapshot from being
deleted manually. However, you can prevent snapshots from being
automatically deleted by clean-up algorithms. Manually created snapshots
(see Section 3.5.2, “Creating Snapshots”) have no clean-up
algorithm assigned unless you specify one with
--cleanup-algorithm. Automatically created snapshots
always either have the number or
timeline algorithm assigned. To remove such an
assignment from one or more snapshots, proceed as follows:
List all available snapshots:
tux > sudo snapper list -a
Memorize the number of the snapshot(s) you want to prevent from being deleted.
Run the following command and replace the number placeholders with the number(s) you memorized:
tux > sudo snapper modify --cleanup-algorithm "" #1 #2 #n
Check the result by running snapper list -a again.
The entry in the column Cleanup should now be empty
for the snapshots you modified.
See the Snapper home page at http://snapper.io/.
Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to a remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.
openSUSE Leap supports two different kinds of VNC sessions: One-time sessions that “live” as long as the VNC connection from the client is kept up, and persistent sessions that “live” until they are explicitly terminated.
A machine can offer both kinds of sessions simultaneously on different ports, but an open session cannot be converted from one type to the other.
sddm not Supported
A machine running KDE Plasma 5 can reliably accept VNC connections only if
it uses a display manager other than sddm. The
lightdm display manager can be used as an
alternative.
The vncviewer Client
To connect to a VNC service provided by a server, a client is needed. The
default in openSUSE Leap is vncviewer, provided by the
tigervnc package.
To start your VNC viewer and initiate a session with the server, use the command:
tux > vncviewer jupiter.example.com:1
Instead of the VNC display number you can also specify the port number with two colons:
tux > vncviewer jupiter.example.com::5901
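The mapping between VNC display numbers and TCP ports follows a fixed rule: the port is 5900 plus the display number. A small sketch (jupiter.example.com is a placeholder host):

```shell
# VNC port numbering: port = 5900 + display number
display=1
port=$(( 5900 + display ))
echo "$port"    # 5901

# The following two invocations therefore address the same session:
echo "vncviewer jupiter.example.com:${display}"
echo "vncviewer jupiter.example.com::${port}"
```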
The actual display or port number you specify in the VNC client must be
the same as the display or port number picked by the
vncserver command on the target machine. See
Section 4.4, “Persistent VNC Sessions” for further info.
By running vncviewer without specifying
--listen or a host to connect to, it will show a window
to ask for connection details. Enter the host into the field in the same format as in Section 4.1.1, “Connecting Using the vncviewer CLI”
and click .
The VNC protocol supports different kinds of encrypted connections, not to be confused with password authentication. If a connection does not use TLS, the text “(Connection not encrypted!)” can be seen in the window title of the VNC viewer.
Remmina is a modern and feature rich remote desktop client. It supports several access methods, for example VNC, SSH, RDP, or Spice.
To use Remmina, verify whether the remmina package is installed on your system, and install it if not. Remember to install the VNC plug-in for Remmina as well:
root # zypper in remmina remmina-plugin-vnc
Run Remmina by entering the remmina command.
The main application window shows the list of stored remote sessions. Here you can add and save a new remote session, quick-start a new session without saving it, start a previously saved session, or set Remmina's global preferences.
To add and save a new remote session, click
in the top left of the main window. The
window opens.
Complete the fields that specify your newly added remote session profile. The most important are:
Name of the profile. It will be listed in the main window.
The protocol to use when connecting to the remote session, for example VNC.
The IP or DNS address and display number of the remote server.
Credentials to use for remote authentication. Leave empty for no authentication.
Select the best options according to your connection speed and quality.
Select the tab to enter more specific settings.
If the communication between the client and the remote server is not encrypted, activate , otherwise the connection fails.
Select the tab for advanced SSH tunneling and authentication options.
Confirm with . Your new profile will be listed in the main window.
You can either start a previously saved session, or quick-start a remote session without saving the connection details.
To start a remote session quickly without adding and saving connection details, use the drop-down box and text field at the top of the main window.
Select the communication protocol from the drop-down box, for example 'VNC', then enter the VNC server DNS or IP address followed by a colon and a display number, and confirm with Enter.
To open a specific remote session, double-click it from the list of sessions.
Remote sessions are opened in tabs of a separate window. Each tab hosts one session. The toolbar on the left of the window helps you manage the windows / sessions, such as toggle fullscreen mode, resize the window to match the display size of the session, send specific keystrokes to the session, take screenshots of the session, or set the image quality.
To edit a saved remote session, right-click its name in Remmina's main window and select . Refer to Section 4.2.3, “Adding Remote Sessions” for the description of the relevant fields.
To copy a saved remote session, right-click its name in Remmina's main window and select . In the window, change the name of the profile, optionally adjust relevant options, and confirm with .
To delete a saved remote session, right-click its name in Remmina's main window and select . Confirm with in the next dialog.
If you need to open a remote session from the command line or from a batch file without first opening the main application window, use the following syntax:
tux > remmina -c profile_name.remmina
Remmina's profile files are stored in the
.local/share/remmina/ directory in your home
directory. To determine which profile file belongs to the session you want
to open, run Remmina, click the session name in the main window, and read
the path to the profile file in the window's status line at the bottom.
While Remmina is not running, you can rename the profile file to a more
reasonable file name, such as sle15.remmina. You can
even copy the profile file to your custom directory and run it using the
remmina -c command from there.
A one-time session is initiated by the remote client. It starts a graphical login screen on the server. This way you can choose the user which starts the session and, if supported by the login manager, the desktop environment. When you terminate the client connection to such a VNC session, all applications started within that session will be terminated, too. One-time VNC sessions cannot be shared, but it is possible to have multiple sessions on a single host at the same time.
Start › › .
Check .
If necessary, also check (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via .
Confirm your settings with .
In case not all needed packages are available yet, you need to approve the installation of missing packages.
The default configuration on openSUSE Leap serves sessions with a
resolution of 1024x768 pixels at a color depth of 16-bit. The sessions are
available on ports 5901 for
“regular” VNC viewers (equivalent to VNC display
1) and on port
5801 for Web browsers.
Other configurations can be made available on different ports, see Section 4.3.3, “Configuring One-time VNC Sessions”.
VNC display numbers and X display numbers are independent in one-time sessions. A VNC display number is manually assigned to every configuration that the server supports (:1 in the example above). Whenever a VNC session is initiated with one of the configurations, it automatically gets a free X display number.
By default, both the VNC client and server try to communicate securely via a self-signed SSL certificate, which is generated after installation. You can either use the default one, or replace it with your own. When using the self-signed certificate, you need to confirm its signature before the first connection.
To connect to a persistent VNC session, a VNC viewer must be installed, see
also Section 4.1, “The vncviewer Client”.
You can skip this section, if you do not need or want to modify the default configuration.
One-time VNC sessions are started via the systemd socket
xvnc.socket. By default it offers six
configuration blocks: three for VNC viewers (vnc1 to
vnc3), and three serving a Java applet
(vnchttpd1 to vnchttpd3). By default
only vnc1 and vnchttpd1 are active.
To activate the VNC server socket at boot time, run the following command:
root # systemctl enable xvnc.socket
To start the socket immediately, run:
root # systemctl start xvnc.socket
The Xvnc server can be configured via the
server_args option. For a list of options, see
Xvnc --help.
When adding custom configurations, make sure they are not using ports that are already in use by other configurations, other services, or existing persistent VNC sessions on the same host.
Activate configuration changes by entering the following command:
tux > sudo systemctl reload xvnc.socket
When activating Remote Administration as described in
Procedure 4.1, “Enabling One-time VNC Sessions”, the ports
5801 and
5901 are opened in the firewall.
If the network interface serving the VNC sessions is protected by a
firewall, you need to manually open the respective ports when activating
additional ports for VNC sessions. See
Chapter 15, Masquerading and Firewalls for instructions.
A persistent VNC session is initiated on the server. The session and all applications started in this session run regardless of client connections until the session is terminated.
A persistent session can be accessed from multiple clients simultaneously. This is ideal for demonstration purposes where one client has full access and all other clients have view-only access. Another use case are trainings where the trainer might need access to the trainee's desktop. However, most of the times you probably do not want to share your VNC session.
In contrast to one-time sessions that start a display manager, a persistent session starts a ready-to-operate desktop that runs as the user that started the VNC session. Access to persistent sessions is protected by a password.
Access to persistent sessions is protected by two possible types of passwords:
a regular password that grants full access or
an optional view-only password that grants a non-interactive (view-only) access.
A session can have multiple client connections of both kinds at once.
Open a shell and make sure you are logged in as the user that should own the VNC session.
If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the port used by your session in the firewall. If starting multiple sessions you may alternatively open a range of ports. See Chapter 15, Masquerading and Firewalls for details on how to configure the firewall.
vncserver uses the ports
5901 for display
:1, 5902 for
display :2, and so on. For persistent sessions, the VNC
display and the X display usually have the same number.
To start a session with a resolution of 1024x768 pixels and with a color depth of 16-bit, enter the following command:
tux > vncserver -geometry 1024x768 -depth 16
The vncserver command picks an unused display number
when none is given and prints its choice. See man 1
vncserver for more options.
When running vncserver for the first time, it asks for a
password for full access to the session. If needed, you can also provide a
password for view-only access to the session.
The password(s) you are providing here are also used for future sessions
started by the same user. They can be changed with the
vncpasswd command.
Make sure to use strong passwords of significant length (eight or more characters). Do not share these passwords.
To terminate the session shut down the desktop environment that runs inside the VNC session from the VNC viewer as you would shut it down if it was a regular local X session.
If you prefer to manually terminate a session, open a shell on the VNC
server and make sure you are logged in as the user that owns the VNC session
you want to terminate. Run the following command to terminate the session
that runs on display :1:
tux > vncserver -kill :1
To connect to a persistent VNC session, a VNC viewer must be installed, see
also Section 4.1, “The vncviewer Client”.
Persistent VNC sessions can be configured by editing
$HOME/.vnc/xstartup. By default this shell script
starts the same GUI/window manager it was started from. In openSUSE Leap
this will either be GNOME or IceWM. If you want to start your session
with a window manager of your choice, set the variable
WINDOWMANAGER:
WINDOWMANAGER=gnome vncserver -geometry 1024x768
WINDOWMANAGER=icewm vncserver -geometry 1024x768
Persistent VNC sessions are configured in a single per-user configuration. Multiple sessions started by the same user will all use the same start-up and password files.
If the VNC server is set up properly, all communication between the VNC server and the client is encrypted. The authentication happens at the beginning of the session, the actual data transfer only begins afterward.
Whether for a one-time or a persistent VNC session, security options are
configured via the -securitytypes parameter of the
/usr/bin/Xvnc command located on the
server_args line. The -securitytypes
parameter selects both authentication method and encryption. It has the
following options:
No authentication.
Authentication using custom password.
Authentication using PAM to verify user's password.
No encryption.
Anonymous TLS encryption. Everything is encrypted, but there is no verification of the remote host. So you are protected against passive attackers, but not against man-in-the-middle attackers.
TLS encryption with certificate. If you use a self-signed certificate, you will be asked to verify it on the first connection. On subsequent connections you will be warned only if the certificate changed. So you are protected against everything except man-in-the-middle on the first connection (similar to typical SSH usage). If you use a certificate signed by a certificate authority matching the machine name, then you get full security (similar to typical HTTPS usage).
With X509 based encryption, you need to specify the path to the X509
certificate and the key with -X509Cert and
-X509Key options.
If you select multiple security types separated by comma, the first one supported and allowed by both client and server will be used. That way you can configure opportunistic encryption on the server. This is useful if you need to support VNC clients that do not support encryption.
On the client, you can also specify the allowed security types to prevent a downgrade attack if you are connecting to a server which you know has encryption enabled (although our vncviewer will warn you with the "Connection not encrypted!" message in that case).
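As an illustration, opportunistic encryption as described above could be configured with a server_args line similar to the following. The certificate and key paths are placeholders, and the exact security type names depend on your TigerVNC version, so treat this as a sketch rather than a drop-in value:

```
server_args = -securitytypes=X509Vnc,TLSVnc,VncAuth -X509Cert /etc/vnc/cert.pem -X509Key /etc/vnc/key.pem
```

With this ordering, a client that supports certificate-based TLS uses it; an older client that only supports password authentication without encryption can still connect via VncAuth.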
Sophisticated system configurations require specific disk setups. All common
partitioning tasks can be done with YaST. To get persistent device naming
with block devices, use the block devices below
/dev/disk/by-id or
/dev/disk/by-uuid. Logical Volume Management (LVM) is a
disk partitioning scheme that is designed to be much more flexible than the
physical partitioning used in standard setups. Its snapshot functionality
enables easy creation of data backups. Redundant Array of Independent Disks
(RAID) offers increased data integrity, performance, and fault tolerance.
openSUSE Leap also supports multipath I/O. There is also the
option to use iSCSI as a networked disk.
With the expert partitioner, shown in Figure 5.1, “The YaST Partitioner”, manually modify the partitioning of one or several hard disks. You can add, delete, resize, and edit partitions, or access the soft RAID, and LVM configuration.
Although it is possible to repartition your system while it is running, the risk of making a mistake that causes data loss is very high. Try to avoid repartitioning your installed system and always create a complete backup of your data before attempting to do so.
All existing or suggested partitions on all connected hard disks are
displayed in the list of in the YaST
dialog. Entire hard disks are listed as
devices without numbers, such as
/dev/sda. Partitions are listed as parts of
these devices, such as
/dev/sda1. The size, type,
encryption status, file system, and mount point of the hard disks and their
partitions are also displayed. The mount point describes where the partition
appears in the Linux file system tree.
Several functional views are available on the left hand . These views can be used to collect information about existing storage
configurations, configure functions (like RAID,
Volume Management, Crypt Files), and view file systems with additional features, such as Btrfs, NFS, or
TMPFS.
If you run the expert dialog during installation, any free hard disk space is also listed and automatically selected. To provide more disk space to openSUSE® Leap, free the needed space starting from the bottom toward the top of the list (starting from the last partition of a hard disk toward the first).
openSUSE Leap allows to use and create different partition tables. In some cases the partition table is called disk label. The partition table is important to the boot process of your computer. If you want to boot your machine from a partition in a newly created partition table, make sure that the table format is supported by the firmware.
To change the partition table, click the relevant disk name in the and choose › .
The master boot record (MBR) is the legacy partition table used on IBM PCs. It is sometimes also called a MS-DOS partition table. The MBR only supports 4 primary partitions. If the disk already has a MBR, openSUSE Leap allows you to create additional partitions in it which can be used as the installation target.
The limit of 4 partitions can be overcome by creating an extended partition. The extended partition itself is a primary partition and can contain more logical partitions.
UEFI firmware usually supports booting from MBR in the legacy mode.
UEFI computers use a GUID Partition Table (GPT) by default. openSUSE Leap will create a GPT on a disk if no other partition table exists.
Old BIOS firmware does not support booting from GPT partitions.
You need a GPT partition table if you want to use one of the following features:
More than 4 primary partitions
UEFI Secure Boot
Disks larger than 2 TB
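The 2 TB figure follows from the MBR format itself: partition sizes are stored as 32-bit sector counts, and with the usual 512-byte sectors this limits the addressable space to 2 TiB. A quick check:

```shell
# 2^32 sectors * 512 bytes per sector = MBR addressing limit
sectors=4294967296          # 2^32
bytes=$(( sectors * 512 ))
echo "$bytes"               # 2199023255552
echo $(( bytes / 1099511627776 ))   # in TiB (1 TiB = 2^40 bytes): 2
```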
On IBM z Systems platforms, SUSE Linux Enterprise Server supports SCSI hard disks and direct access storage devices (DASD). While SCSI disks can be partitioned as described above, DASDs can have no more than three partition entries in their partition tables.
The YaST Partitioner can create and format partitions with several
file systems. The default file system used by openSUSE Leap is
Btrfs. For details, see
Section 5.1.2.2, “Btrfs Partitioning”.
Other commonly used file systems are available:
Ext2, Ext3,
Ext4, FAT,
XFS and Swap.
To create a partition select and then a hard disk with free space. The actual modification can be done in the tab:
Click to create a new partition. When using MBR, specify if you want to create a primary or extended partition. Within the extended partition, you can create several logical partitions. For details, see Section 5.1.1, “Partition Tables”).
Specify the size of the new partition. You can either choose to occupy all the free unpartitioned space, or enter a custom size.
Select the file system to use and a mount point. YaST suggests a mount
point for each partition created. To use a different mount method, like
mount by label, select . For more
information on supported file systems, see the Storage Administration Guide.
Specify additional file system options if your setup requires them. This is necessary, for example, if you need persistent device names. For details on the available options, refer to Section 5.1.3, “Editing a Partition”.
Click to apply your partitioning setup and leave the partitioning module.
If you created the partition during installation, you are returned to the installation overview screen.
The default file system for the root partition is Btrfs (see Chapter 3, System Recovery and Snapshot Management with Snapper for more information on Btrfs). The root file system is the default subvolume and it is not listed in the list of created subvolumes. As a default Btrfs subvolume, it can be mounted as a normal file system.
The default partitioning setup suggests the root partition as Btrfs with
/boot being a directory. To encrypt the root
partition, make sure to use the GPT partition
table type instead of the default MSDOS type. Otherwise the GRUB2 boot
loader may not have enough space for the second stage loader.
It is possible to create snapshots of Btrfs subvolumes—either
manually, or automatically based on system events. For example when making
changes to the file system, zypper invokes the
snapper command to create snapshots before and after the
change. This is useful if you are not satisfied with the change
zypper made and want to restore the previous state. As
snapper invoked by zypper creates
snapshots of
the root file system by default, it makes sense to
exclude specific directories from being included into snapshots. This is the reason why YaST suggests creating the following
separate subvolumes.
/boot/grub2/i386-pc,
/boot/grub2/x86_64-efi,
/boot/grub2/powerpc-ieee1275,
/boot/grub2/s390x-emu
A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on AMD64/Intel 64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.
/home
If /home does not reside on a separate partition, it
is excluded to avoid data loss on rollbacks.
/opt, /var/opt
Third-party products usually get installed to /opt. It
is excluded to avoid uninstalling these applications on rollbacks.
/srv
Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.
/tmp, /var/tmp,
/var/cache, /var/crash
All directories containing temporary files and caches are excluded from snapshots.
/usr/local
This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.
/var/lib/libvirt/images
The default location for virtual machine images managed with libvirt.
Excluded to ensure virtual machine images are not replaced with older
versions during a rollback. By default, this subvolume is created with the
option no copy on write.
/var/lib/mailman, /var/spool
Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.
/var/lib/named
Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.
/var/lib/mariadb,
/var/lib/mysql, /var/lib/pgsql
These directories contain database data. By default, these subvolumes are
created with the option no copy on write.
/var/log
Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.
Since saved snapshots require more disk space, it is recommended to reserve enough space for Btrfs. The suggested size for a root Btrfs partition with default subvolumes is 20 GB.
Subvolumes of a Btrfs partition can now be managed with the YaST module. You can add new or remove existing subvolumes.
Start the YaST with › .
Choose in the left pane.
Select the Btrfs partition whose subvolumes you need to manage and click .
Click . You can see a list of all
existing subvolumes of the selected Btrfs partition. You can notice
several @/.snapshots/xyz/snapshot entries—each
of these subvolumes belongs to one existing snapshot.
Depending on whether you want to add or remove subvolumes, do the following:
To remove a subvolume, select it from the list of and click .
To add a new subvolume, enter its name to the text box and click .
Confirm with and .
Leave the partitioner with .
When you create a new partition or modify an existing partition, you can set various parameters. For new partitions, the default parameters set by YaST are usually sufficient and do not require any modification. To edit your partition setup manually, proceed as follows:
Select the partition.
Click to edit the partition and set the parameters:
Even if you do not want to format the partition at this stage, assign it a file system ID to ensure that the partition is registered correctly. Typical values are , , , and .
To change the partition file system, click and select file system type in the list.
openSUSE Leap supports several types of file systems. Btrfs is the Linux file system of choice for the root partition because of its advanced features. It supports copy-on-write functionality, creating snapshots, multi-device spanning, subvolumes, and other useful techniques. XFS, Ext3 and JFS are journaling file systems. These file systems can restore the system very quickly after a system crash, using write processes logged during the operation. Ext2 is not a journaling file system, but it is adequate for smaller partitions because it does not require much disk space for management.
The default file system for the root partition is Btrfs. The default file system for additional partitions is XFS.
Swap is a special format that allows the partition to be used as virtual memory. Create a swap partition of at least 256 MB. However, if you use up your swap space, consider adding more memory to your system instead of adding more swap space.
Changing the file system and reformatting partitions irreversibly deletes all data from the partition.
For details on the various file systems, refer to Storage Administration Guide.
If you activate the encryption, all data is written to the hard disk in encrypted form. This increases the security of sensitive data, but reduces the system speed, as the encryption takes some time to process. More information about the encryption of file systems is provided in Chapter 11, Encrypting Partitions and Files.
Specify the directory where the partition should be mounted in the file system tree. Select from YaST suggestions or enter any other name.
Specify various parameters contained in the global file system
administration file (/etc/fstab). The default
settings should suffice for most setups. You can, for example, change
the file system identification from the device name to a volume label.
In the volume label, use all characters except / and
space.
To get persistent devices names, use the mount option , or . In openSUSE Leap, persistent device names are enabled by default.
If you prefer to mount the partition by its label, you need to define
one in the text entry. For example, you
could use the partition label HOME for a partition
intended to mount to /home.
If you intend to use quotas on the file system, use the mount option . This must be done before you can define quotas for users in the YaST module. For further information on how to configure user quota, refer to Section 5.3.3, “Managing Quotas”.
Select to save the changes.
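For illustration, an /etc/fstab entry enabling user and group quotas could look like the fragment below. The device, mount point, and file system are placeholders; the exact option names may differ between file systems:

```
/dev/sda3  /home  xfs  defaults,usrquota,grpquota  0 0
```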
To resize an existing file system, select the partition and use . Note that it is not possible to resize partitions while mounted. To resize partitions, unmount the relevant partition before running the partitioner.
After you select a hard disk device (like ) in the pane, you can access the menu in the lower right part of the window. The menu contains the following commands:
This option helps you create a new partition table on the selected device.
Creating a new partition table on a device irreversibly removes all the partitions and their data from that device.
This option helps you clone the device partition layout (but not the data) to other available disk devices.
After you select the host name of the computer (the top-level of the tree in the pane), you can access the menu in the lower right part of the window. The menu contains the following commands:
To access SCSI over IP block devices, you first need to configure iSCSI. This results in additionally available devices in the main partition list.
Selecting this option helps you configure the multipath enhancement to the supported mass storage devices.
The following section includes a few hints and tips on partitioning that should help you make the right decisions when setting up your system.
Note that different partitioning tools may start counting the cylinders of
a partition with 0 or with 1. When
calculating the number of cylinders, you should always use the difference
between the last and the first cylinder number and add one.
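For example, a partition spanning cylinders 5 through 9 inclusive covers five cylinders, not four:

```shell
first=5
last=9
# cylinders covered = last - first + 1
echo $(( last - first + 1 ))   # 5
```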
Swap is used to extend the available physical memory. It is then possible to use more memory than there is physical RAM available. The memory management system of kernels before 2.4.10 needed swap as a safety measure. Then, if you did not have twice the size of your RAM in swap, the performance of the system suffered. These limitations no longer exist.
Linux uses a technique called “Least Recently Used” (LRU) to select pages that might be moved from memory to disk. Therefore, running applications have more memory available and caching works more smoothly.
If an application tries to allocate the maximum allowed memory, problems with swap can arise. There are three major scenarios to look at:
The application gets the maximum allowed memory. All caches are freed, and thus all other running applications are slowed. After a few minutes, the kernel's out-of-memory kill mechanism activates and kills the process.
At first, the system slows like a system without swap. After all physical RAM has been allocated, swap space is used as well. At this point, the system becomes very slow and it becomes impossible to run commands remotely. Depending on the speed of the hard disks that run the swap space, the system stays in this condition for about 10 to 15 minutes until the out-of-memory kill mechanism resolves the issue. Note that you will need a certain amount of swap if the computer needs to perform a “suspend to disk”. In that case, the swap size should be large enough to contain the necessary data from memory (512 MB–1 GB).
It is better not to have an application that is out of control and swapping excessively in this case. If you use such an application, the system will need many hours to recover. In the process, it is likely that other processes get timeouts and faults, leaving the system in an undefined state, even after terminating the faulty process. In this case, do a hard machine reboot and try to get it running again. Lots of swap is only useful if you have an application that relies on this feature. Such applications (like databases or graphics manipulation programs) often have an option to directly use hard disk space for their needs. It is advisable to use this option instead of using lots of swap space.
If your system is not out of control, but needs more swap after some time, it is possible to extend the swap space online. If you prepared a partition for swap space, add this partition with YaST. If you do not have a partition available, you can also use a swap file to extend the swap. Swap files are generally slower than partitions, but compared to physical RAM, both are extremely slow so the actual difference is negligible.
To add a swap file in the running system, proceed as follows:
Create an empty file in your system. For example, if you want to add a
swap file with 128 MB swap at
/var/lib/swap/swapfile, use the commands:
tux > sudo mkdir -p /var/lib/swap
tux > sudo dd if=/dev/zero of=/var/lib/swap/swapfile bs=1M count=128
Initialize this swap file with the command
tux > sudo mkswap /var/lib/swap/swapfile
Do not reformat existing swap partitions with mkswap
if possible. Reformatting with mkswap will change
the UUID value of the swap partition. Either reformat via YaST (will
update /etc/fstab) or adjust
/etc/fstab manually.
Activate the swap with the command
tux > sudo swapon /var/lib/swap/swapfile
To disable this swap file, use the command
tux > sudo swapoff /var/lib/swap/swapfile
Check the current available swap spaces with the command
tux > cat /proc/swaps
Note that at this point, it is only temporary swap space. After the next reboot, it is no longer used.
To enable this swap file permanently, add the following line to
/etc/fstab:
/var/lib/swap/swapfile swap swap defaults 0 0
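The dd parameters above generalize: the swap file size is bs multiplied by count. A sketch for computing the count for an arbitrary size (the 256 MB value and the file path are just examples):

```shell
size_mb=256
bs_mb=1
count=$(( size_mb / bs_mb ))
echo "$count"   # 256
# Hypothetical resulting command for a 256 MB swap file:
echo "dd if=/dev/zero of=/var/lib/swap/swapfile bs=${bs_mb}M count=${count}"
```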
From the , access the LVM configuration by clicking the item in the pane. However, if a working LVM configuration already exists on your system, it is automatically activated upon entering the initial LVM configuration of a session. In this case, all disks containing a partition (belonging to an activated volume group) cannot be repartitioned. The Linux kernel cannot reread the modified partition table of a hard disk when any partition on this disk is in use. If you already have a working LVM configuration on your system, physical repartitioning should not be necessary. Instead, change the configuration of the logical volumes.
At the beginning of the physical volumes (PVs), information about the volume
is written to the partition. To reuse such a partition for other non-LVM
purposes, it is advisable to delete the beginning of this volume. For
example, in the VG system and PV
/dev/sda2, do this with the command
dd if=/dev/zero of=/dev/sda2 bs=512 count=1.
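Because experimenting with dd on a real partition is destructive, the technique can be tried safely on an image file first. This is a sketch; demo.img is just a scratch file, and the "LABELONE" string merely stands in for on-disk metadata:

```shell
# Create a scratch image and write a fake label at its start.
truncate -s 1M demo.img
printf 'LABELONE' | dd of=demo.img conv=notrunc 2>/dev/null
# Zero the first 512-byte sector, as the text suggests for a PV.
dd if=/dev/zero of=demo.img bs=512 count=1 conv=notrunc 2>/dev/null
```

Only after verifying the effect on a scratch file should the command be pointed at a real device.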
The file system used for booting (the root file system or
/boot) must not be stored on an LVM logical volume.
Instead, store it on a normal physical partition.
To change your /usr or
swap partitions to logical volumes, refer to
Procedure 9.1, “Updating Init RAM Disk When Switching to Logical Volumes”.
This section explains specific steps to take when configuring LVM.
Using LVM is sometimes associated with increased risk such as data loss. Risks also include application crashes, power failures, and faulty commands. Save your data before implementing LVM or reconfiguring volumes. Never work without a backup.
The YaST LVM configuration can be reached from the YaST Expert Partitioner (see Section 5.1, “Using the YaST Partitioner”) within the item in the pane. The Expert Partitioner allows you to edit and delete existing partitions and create new ones that need to be used with LVM. The first task is to create PVs that provide space to a volume group:
Select a hard disk from .
Change to the tab.
Click and enter the desired size of the PV on this disk.
Use and change the to . Do not mount this partition.
Repeat this procedure until you have defined all the desired physical volumes on the available disks.
If no volume group exists on your system, you must add one (see Figure 5.3, “Creating a Volume Group”). It is possible to create additional groups by clicking in the pane, and then on . One single volume group is usually sufficient.
Enter a name for the VG, for example, system.
Select the desired . This value defines the size of a physical block in the volume group. All the disk space in a volume group is handled in blocks of this size.
Add the prepared PVs to the VG by selecting the device and clicking . Selecting several devices is possible by holding Ctrl while selecting the devices.
Select to make the VG available to further configuration steps.
If you have multiple volume groups defined and want to add or remove PVs, select the volume group in the list and click . In the following window, you can add or remove PVs to the selected volume group.
After the volume group has been filled with PVs, define the LVs which the operating system should use in the next dialog. Choose the current volume group and change to the tab. , , , and LVs as needed until all space in the volume group has been occupied. Assign at least one LV to each volume group.
Click and go through the wizard-like pop-up that opens:
Enter the name of the LV. For a partition that should be mounted to
/home, a name like HOME could be
used.
Select the type of the LV. It can be either , , or . Note that you need to create a thin pool first, which can store individual thin volumes. The big advantage of thin provisioning is that the total sum of all thin volumes stored in a thin pool can exceed the size of the pool itself.
Select the size and the number of stripes of the LV. If you have only one PV, selecting more than one stripe is not useful.
Choose the file system to use on the LV and the mount point.
By using stripes it is possible to distribute the data stream in the LV among several PVs (striping). However, striping a volume can only be done over different PVs, each providing at least the amount of space of the volume. The maximum number of stripes equals the number of PVs, where stripe "1" means "no striping". Striping only makes sense with PVs on different hard disks, otherwise performance will decrease.
YaST cannot, at this point, verify the correctness of your entries concerning striping. Any mistake made here is apparent only later when the LVM is implemented on disk.
If you have already configured LVM on your system, the existing logical volumes can also be used. Before continuing, assign appropriate mount points to these LVs. With , return to the YaST Expert Partitioner and finish your work there.
This section describes actions required to create and configure various types of RAID.
The YaST configuration can be reached from the YaST Expert Partitioner, described in Section 5.1, “Using the YaST Partitioner”. This partitioning tool enables you to edit and delete existing partitions and create new ones to be used with soft RAID:
Select a hard disk from .
Change to the tab.
Click and enter the desired size of the RAID partition on this disk.
Use and change the to . Do not mount this partition.
Repeat this procedure until you have defined all the desired physical volumes on the available disks.
For RAID 0 and RAID 1, at least two partitions are needed—for RAID 1, usually exactly two and no more. If RAID 5 is used, at least three partitions are required, RAID 6 and RAID 10 require at least four partitions. It is recommended to use partitions of the same size only. The RAID partitions should be located on different hard disks to decrease the risk of losing data if one is defective (RAID 1 and 5) and to optimize the performance of RAID 0. After creating all the partitions to use with RAID, click › to start the RAID configuration.
In the next dialog, choose between RAID levels 0, 1, 5, 6 and 10. Then, select all partitions with either the “Linux RAID” or “Linux native” type that should be used by the RAID system. No swap or DOS partitions are shown.
For RAID types where the order of added disks matters, you can mark individual disks with one of the letters A to E. Click the button, select the disk, and click one of the class buttons to assign the letter of your choice to the disk. Assign all available RAID disks this way, and confirm with . You can easily sort the classified disks with the or buttons, or add a sort pattern from a text file with .
To add a previously unassigned partition to the selected RAID volume, first click the partition then . Assign all partitions reserved for RAID. Otherwise, the space on the partition remains unused. After assigning all partitions, click to select the available .
In this last step, set the file system to use, encryption and the mount
point for the RAID volume. After completing the configuration with
, see the /dev/md0 device and
others indicated with RAID in the expert partitioner.
Check the file /proc/mdstat to find out whether a RAID
partition has been damaged. If the system fails, shut down your Linux system
and replace the defective hard disk with a new one partitioned the same way.
Then restart your system and enter the command mdadm /dev/mdX --add
/dev/sdX. Replace 'X' with your particular device identifiers.
This integrates the hard disk automatically into the RAID system and fully
reconstructs it.
Note that although you can access all data during the rebuild, you may encounter some performance issues until the RAID has been fully rebuilt.
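The check of /proc/mdstat can be wrapped in a small script. This is an optional sketch that degrades gracefully when no software RAID is configured:

```shell
# Report RAID device lines from /proc/mdstat, if the file exists.
# Lines starting with "md" list the arrays and their member devices.
if [ -r /proc/mdstat ]; then
    grep '^md' /proc/mdstat || echo "no md arrays active"
else
    echo "no software RAID support loaded"
fi
```

During a rebuild, /proc/mdstat additionally shows a progress line for the affected array, which the same file can be polled for.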
Configuration instructions and more details for soft RAID can be found in the HOWTOs at:
/usr/share/doc/packages/mdadm/Software-RAID.HOWTO.html
Linux RAID mailing lists are available, such as http://marc.info/?l=linux-raid.
openSUSE Leap supports the parallel installation of multiple kernel versions. When installing a second kernel, a boot entry and an initrd are automatically created, so no further manual configuration is needed. When rebooting the machine, the newly added kernel is available as an additional boot option.
Using this functionality, you can safely test kernel updates while being able to always fall back to the proven former kernel. To do this, do not use the update tools (such as the YaST Online Update or the updater applet), but instead follow the process described in this chapter.
It is recommended to check your boot loader configuration after having installed another kernel to set the default boot entry of your choice. See Section 12.3, “Configuring the Boot Loader with YaST” for more information.
Installing multiple versions of a software package (multiversion support) is enabled by default since openSUSE Leap. To verify this setting, proceed as follows:
Open /etc/zypp/zypp.conf with the editor of your
choice as root.
Search for the string multiversion. If multiversion is
enabled for all kernel packages capable of this feature, the following
line appears uncommented:
multiversion = provides:multiversion(kernel)
To restrict multiversion support to certain kernel flavors, add the
package names as a comma-separated list to the
multiversion option in
/etc/zypp/zypp.conf—for example
multiversion = kernel-default,kernel-default-base,kernel-source
Save your changes.
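The verification step can be scripted. This sketch only reads /etc/zypp/zypp.conf and reports whether the uncommented multiversion line from above is present:

```shell
# Report whether the multiversion line is active in zypp.conf.
conf=/etc/zypp/zypp.conf
if grep -q '^multiversion *= *provides:multiversion(kernel)' "$conf" 2>/dev/null; then
    echo "multiversion enabled"
else
    echo "multiversion not enabled (or $conf not readable)"
fi
```

Note that this only detects the default "provides:" form; a restricted comma-separated package list, as described below, would need its own pattern.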
Make sure that required vendor-provided kernel modules (Kernel Module Packages) are also installed for the new updated kernel. The kernel update process will not warn about kernel modules that may be missing, because the package requirements are still fulfilled by the old kernel that is kept on the system.
When frequently testing new kernels with multiversion support enabled, the
boot menu quickly becomes confusing. Since a /boot
partition usually has limited space you also might run into trouble with
/boot overflowing. While you can delete unused kernel
versions manually with YaST or Zypper (as described below), you can also
configure libzypp to automatically
delete kernels no longer used. By default no kernels are deleted.
Open /etc/zypp/zypp.conf with the editor of your
choice as root.
Search for the string multiversion.kernels and
activate this option by uncommenting the line. This option takes a
comma-separated list of the following values:
3.12.24-7.1:
keep the kernel with the specified version number
latest:
keep the kernel with the highest version number
latest-N:
keep the kernel with the Nth highest version number
running:
keep the running kernel
oldest:
keep the kernel with the lowest version number (the one that was
originally shipped with openSUSE Leap)
oldest+N:
keep the kernel with the Nth lowest version number
Here are some examples:
multiversion.kernels = latest,running
Keep the latest kernel and the one currently running. This is similar to not enabling the multiversion feature, except that the old kernel is removed after the next reboot and not immediately after the installation.
multiversion.kernels = latest,latest-1,running
Keep the last two kernels and the one currently running.
multiversion.kernels = latest,running,3.12.25.rc7-test
Keep the latest kernel, the one currently running, and 3.12.25.rc7-test.
Unless you are using a special setup, always keep the
kernel marked running.
If you do not keep the running kernel, it will be deleted when updating the kernel. In turn, this means that all of the running kernel's modules are also deleted and cannot be loaded anymore.
If you decide not to keep the running kernel, always reboot immediately after a kernel upgrade to avoid issues with modules.
You want to make sure that an old kernel will only be deleted after the system has rebooted successfully with the new kernel.
Change the following line in /etc/zypp/zypp.conf:
multiversion.kernels = latest,running
The previous parameters tell the system to keep the latest kernel and the running one only if they differ.
You want to keep one or more kernel versions to have one or more “spare” kernels.
This can be useful if you need kernels for testing. If something goes wrong (for example, your machine does not boot), you still can use one or more kernel versions which are known to be good.
Change the following line in /etc/zypp/zypp.conf:
multiversion.kernels = latest,latest-1,latest-2,running
When you reboot your system after the installation of a new kernel, the
system will keep three kernels: the current kernel (configured as
latest,running) and its two immediate predecessors
(configured as latest-1 and latest-2).
You make regular system updates and install new kernel versions. However, you are also compiling your own kernel version and want to make sure that the system will keep them.
Change the following line in /etc/zypp/zypp.conf:
multiversion.kernels = latest,3.12.28-4.20,running
When you reboot your system after the installation of a new kernel, the
system will keep two kernels: the new and running kernel (configured as
latest,running) and your self-compiled kernel
(configured as 3.12.28-4.20).
Start YaST and open the software manager via › .
List all packages capable of providing multiple versions by choosing › › .
Select a package and open its tab in the bottom pane on the left.
To install a package, click the check box next to it. A green check mark indicates it is selected for installation.
To remove an already installed package (marked with a white check mark),
click the check box next to it until a red X indicates it is
selected for removal.
Click to start the installation.
Use the command zypper se -s 'kernel*' to display a
list of all kernel packages available:
S | Name           | Type       | Version         | Arch   | Repository
--+----------------+------------+-----------------+--------+-------------------
v | kernel-default | package    | 2.6.32.10-0.4.1 | x86_64 | Alternative Kernel
i | kernel-default | package    | 2.6.32.9-0.5.1  | x86_64 | (System Packages)
  | kernel-default | srcpackage | 2.6.32.10-0.4.1 | noarch | Alternative Kernel
i | kernel-default | package    | 2.6.32.9-0.5.1  | x86_64 | (System Packages)
...
Specify the exact version when installing:
tux > sudo zypper in kernel-default-2.6.32.10-0.4.1
When uninstalling a kernel, use the commands zypper se -si
'kernel*' to list all kernels installed and zypper
rm PACKAGENAME-VERSION to remove the
package.
Kernel:HEAD
Add the Kernel:HEAD repository with zypper ar (the repository
is added using the alias kernel-repo):
tux > sudo zypper ar \
  http://download.opensuse.org/repositories/Kernel:/HEAD/standard/ \
  kernel-repo
To refresh repositories, run:
tux > sudo zypper ref
To upgrade the kernel to the latest version in the
Kernel:HEAD repository, run:
tux > sudo zypper dist-upgrade --from kernel-repo
Reboot the machine.
Kernel:HEAD May Break the System
Installing a kernel from Kernel:HEAD should never be
necessary, because important fixes are backported by SUSE and are made
available as official updates. Installing the latest kernel only makes
sense for kernel developers and kernel testers. If installing from
Kernel:HEAD, be aware that it may break your system.
Make sure to always have the original kernel available for booting as
well.
This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.
These configuration options are stored in the GConf system. Access the
GConf system with tools such as the gconftool-2 command
line interface or the gconf-editor GUI tool.
To automatically start applications in GNOME, use one of the following methods:
To run applications for each user:
Put .desktop files in
/usr/share/gnome/autostart.
To run applications for an individual user:
Put .desktop files in
~/.config/autostart.
To disable an application that starts automatically, add
X-Autostart-enabled=false to the
.desktop file.
GNOME Files (nautilus) monitors volume-related events and
responds with a user-specified policy. You can use GNOME Files to
automatically mount hotplugged drives and inserted removable media,
automatically run programs, and play audio CDs or video DVDs. GNOME Files
can also automatically import photos from a digital camera.
System administrators can set system-wide defaults. For more information, see Section 7.3, “Changing Preferred Applications”.
To change users' preferred applications, edit
/etc/gnome_defaults.conf. Find further hints within
this file.
For more information about MIME types, see http://www.freedesktop.org/Standards/shared-mime-info-spec.
To add document templates for users, fill in the
Templates directory in a user's home directory. You
can do this manually for each user by copying the files into
~/Templates, or system-wide by adding a
Templates directory with documents to
/etc/skel before the user is created.
A user creates a new document from a template by right-clicking the desktop and selecting .
For more information, see http://help.gnome.org/admin/.
openSUSE® Leap is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. openSUSE Leap supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this sup…
Booting a Linux system involves different components and tasks. The
hardware itself is initialized by the BIOS or the UEFI, which starts the
kernel by means of a boot loader. After this point, the boot process is
completely controlled by the operating system and handled by systemd.
systemd provides a set of “targets” that boot setups for
everyday usage, maintenance or emergencies.
systemd DaemonThe program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its chi…
journalctl: Query the systemd Journal
When systemd replaced traditional init scripts in openSUSE Leap
(see Chapter 10, The systemd Daemon), it introduced its own logging system
called journal. There is no need to run a
syslog-based service anymore, as all system events
are written to the journal.
This chapter describes how to configure GRUB 2, the boot loader used in openSUSE® Leap. It is the successor to the traditional GRUB boot loader—now called “GRUB Legacy”. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 9, Introduction to the Booting Process. For details on Secure Boot support for UEFI machines, see Chapter 14, UEFI (Unified Extensible Firmware Interface).
Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.
UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.
This chapter starts with information about various software packages, the
virtual consoles and the keyboard layout. We talk about software components
like bash,
cron and
logrotate, because they were
changed or enhanced during the last release cycles. Even if they are small
or considered of minor importance, users may want to change their default
behavior, because these components are often closely coupled with the
system. The chapter concludes with a section about language and
country-specific settings (I18N and L10N).
udevThe kernel can add or remove almost any device in a running system. Changes in the device state (whether a device is plugged in or removed) need to be propagated to user space. Devices need to be configured when they are plugged in and recognized. Users of a certain device need to be informed about …
openSUSE® Leap is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. openSUSE Leap supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this support is implemented on 64-bit openSUSE Leap platforms. It explains how 32-bit applications are executed and how 32-bit applications should be compiled to enable them to run both in 32-bit and 64-bit system environments. Additionally, find information about the kernel API and an explanation of how 32-bit applications can run under a 64-bit kernel.
openSUSE Leap for the 64-bit platforms amd64 and Intel 64 is designed so that existing 32-bit applications run in the 64-bit environment “out-of-the-box.” This support means that you can continue to use your preferred 32-bit applications without waiting for a corresponding 64-bit port to become available.
If an application is available both for 32-bit and 64-bit environments, parallel installation of both versions is bound to lead to problems. In such cases, decide on one of the two versions and install and use this.
An exception to this rule is PAM (pluggable authentication modules). openSUSE Leap uses PAM in the authentication process as a layer that mediates between user and application. On a 64-bit operating system that also runs 32-bit applications it is necessary to always install both versions of a PAM module.
To be executed correctly, every application requires a range of libraries. Unfortunately, the names for the 32-bit and 64-bit versions of these libraries are identical. They must be differentiated from each other in another way.
To retain compatibility with the 32-bit version, the libraries are stored at
the same place in the system as in the 32-bit environment. The 32-bit
version of libc.so.6 is located under
/lib/libc.so.6 in both the 32-bit and 64-bit
environments.
All 64-bit libraries and object files are located in directories called
lib64. The 64-bit object files that you would normally
expect to find under /lib and
/usr/lib are now found under
/lib64 and /usr/lib64. This means
that there is space for the 32-bit libraries under /lib
and /usr/lib, so the file name for both versions can
remain unchanged.
Subdirectories of 32-bit /lib directories which contain
data content that does not depend on the word size are not moved. This
scheme conforms to LSB (Linux Standards Base) and FHS (File System Hierarchy
Standard).
All 64-bit architectures support the development of 64-bit objects. The
level of support for 32-bit compiling depends on the architecture. These are
the various implementation options for the toolchain from GCC (GNU Compiler
Collection) and binutils, which include the assembler as
and the linker ld:
A biarch development toolchain allows generation of both 32-bit and
64-bit objects. The compilation of 64-bit objects is the default on almost
all platforms. 32-bit objects can be generated if special flags are used.
This special flag is -m32 for GCC. The flags for the
binutils are architecture-dependent, but GCC transfers the correct flags to
linkers and assemblers. A biarch development toolchain currently exists for
amd64 (supports development for x86 and amd64 instructions), for z Systems
and for POWER. 32-bit objects are normally created on the POWER
platform. The -m64 flag must be used to generate 64-bit
objects.
All header files must be written in an architecture-independent form. The installed 32-bit and 64-bit libraries must have an API (application programming interface) that matches the installed header files. The normal openSUSE Leap environment is designed according to this principle. In the case of manually updated libraries, resolve these issues yourself.
To develop binaries for the other architecture on a biarch architecture, the
respective libraries for the second architecture must additionally be
installed. These packages are called
rpmname-32bit. You also need the respective
headers and libraries from the
rpmname-devel packages and the
development libraries for the second architecture from
rpmname-devel-32bit.
For example, to compile a program that uses libaio on a
system with a 32-bit second architecture (x86_64), you need the following
RPMs:
32-bit runtime package
Headers and libraries for 32-bit development
64-bit runtime package
64-bit development headers and libraries
Most open source programs use an autoconf-based program
configuration. To use autoconf for configuring a program
for the second architecture, overwrite the normal compiler and linker
settings of autoconf by running the
configure script with additional environment variables.
The following example refers to an x86_64 system with x86 as the second architecture.
Use the 32-bit compiler:
CC="gcc -m32"
Instruct the linker to process 32-bit objects (always use
gcc as the linker front-end):
LD="gcc -m32"
Set the assembler to generate 32-bit objects:
AS="gcc -c -m32"
Specify linker flags, such as the location of 32-bit libraries, for example:
LDFLAGS="-L/usr/lib"
Specify the location for the 32-bit object code libraries:
--libdir=/usr/lib
Specify the location for the 32-bit X libraries:
--x-libraries=/usr/lib
Not all of these variables are needed for every program. Adapt them to the respective program.
An example configure call to compile a native 32-bit
application on x86_64 could
appear as follows:
CC="gcc -m32" LDFLAGS="-L/usr/lib" ./configure --prefix=/usr --libdir=/usr/lib --x-libraries=/usr/lib
make
make install
The 64-bit kernels for AMD64/Intel 64 offer both a 64-bit and a 32-bit kernel ABI (application binary interface). The latter is identical with the ABI for the corresponding 32-bit kernel. This means that the 32-bit application can communicate with the 64-bit kernel in the same way as with the 32-bit kernel.
The 32-bit emulation of system calls for a 64-bit kernel does not support
all the APIs used by system programs. This depends on the platform. For this
reason, a few applications, like lspci, must be
compiled as 64-bit applications.
A 64-bit kernel can only load 64-bit kernel modules that have been specially compiled for this kernel. It is not possible to use 32-bit kernel modules.
Some applications require separate kernel-loadable modules. If you intend to use such a 32-bit application in a 64-bit system environment, contact the provider of this application and SUSE to make sure that the 64-bit version of the kernel-loadable module and the 32-bit compiled version of the kernel API are available for this module.
Booting a Linux system involves different components and tasks. The
hardware itself is initialized by the BIOS or the UEFI, which starts the
kernel by means of a boot loader. After this point, the boot process is
completely controlled by the operating system and handled by systemd.
systemd provides a set of “targets” that boot setups for
everyday usage, maintenance or emergencies.
The Linux boot process consists of several stages, each represented by a different component. The following list briefly summarizes the boot process and features all the major components involved:
BIOS/UEFI. After turning on the computer, the BIOS or the UEFI initializes the screen and keyboard, and tests the main memory. Up to this stage, the machine does not access any mass storage media. Subsequently, the information about the current date, time, and the most important peripherals are loaded from the CMOS values. When the first hard disk and its geometry are recognized, the system control passes from the BIOS to the boot loader. If the BIOS supports network booting, it is also possible to configure a boot server that provides the boot loader. On AMD64/Intel 64 systems, PXE boot is needed. Other architectures commonly use the BOOTP protocol to get the boot loader. For more information on UEFI, refer to Chapter 14, UEFI (Unified Extensible Firmware Interface).
Boot Loader. The first physical 512-byte data sector of the first hard disk is loaded into the main memory and the boot loader that resides at the beginning of this sector takes over. The commands executed by the boot loader determine the remaining part of the boot process. Therefore, the first 512 bytes on the first hard disk are called the Master Boot Record (MBR). The boot loader then passes control to the actual operating system, in this case, the Linux kernel. More information about GRUB 2, the Linux boot loader, can be found in Chapter 12, The Boot Loader GRUB 2. For a network boot, the BIOS acts as the boot loader. It gets the boot image from the boot server and starts the system. This is completely independent of local hard disks.
If the root file system fails to mount from within the boot environment, it must be checked and repaired before the boot can continue. The file system checker is started automatically for Ext3 and Ext4 file systems. The repair process is not automated for XFS and Btrfs file systems, and the user is presented with information describing the options available to repair the file system. When the file system has been successfully repaired, exiting the boot environment causes the system to retry mounting the root file system. If successful, the boot continues normally.
Kernel and initramfs.
To pass system control, the boot loader loads both the kernel and an
initial RAM-based file system (initramfs) into
memory. The contents of the initramfs can be
used by the kernel directly. initramfs contains
a small executable called init that handles the
mounting of the real root file system. If special hardware drivers are
needed before the mass storage can be accessed, they must be in
initramfs. For more information about
initramfs, refer to
Section 9.2, “initramfs”. If the system does not have a
local hard disk, the initramfs must provide the
root file system for the kernel. This can be done using a network block
device like iSCSI or SAN, but it is also possible to use NFS as the root
device.
init Process Naming
Two different programs are commonly named “init”:
the initramfs process mounting the root file
system
the operating system process setting up the system
In this chapter we will therefore refer to them as
“init on
initramfs” and “systemd”,
respectively.
init on initramfs.
This program performs all actions needed to mount the proper root file
system. It provides kernel functionality for the needed file system and
device drivers for mass storage controllers with
udev. After the root file system
has been found, it is checked for errors and mounted. If this is
successful, the initramfs is cleaned and the
systemd daemon on the root file system is executed. For more
information about init on
initramfs, refer to
Section 9.3, “Init on initramfs”. Find more information about
udev in
Chapter 16, Dynamic Kernel Device Management with udev.
systemd.
By starting services and mounting file systems, systemd handles the
actual booting of the system. systemd is described in
Chapter 10, The systemd Daemon.
initramfs
initramfs is a small cpio archive that the kernel
can load into a RAM disk. It provides a minimal Linux environment that
enables the execution of programs before the actual root file system is
mounted. This minimal Linux environment is loaded into memory by BIOS or
UEFI routines and does not have specific hardware requirements other than
sufficient memory. The initramfs archive must
always provide an executable named init that
executes the systemd daemon on the root file system for the boot process
to proceed.
Before the root file system can be mounted and the operating system can be
started, the kernel needs the corresponding drivers to access the device on
which the root file system is located. These drivers may include special
drivers for certain kinds of hard disks or even network drivers to access a
network file system. The needed modules for the root file system may be
loaded by init on
initramfs. After the modules are loaded,
udev provides the
initramfs with the needed devices. Later in the
boot process, after changing the root file system, it is necessary to
regenerate the devices. This is done by the systemd unit
udev.service with the command
udevtrigger.
If you need to change hardware (for example, hard disks), and this hardware requires different drivers to be in the kernel at
boot time, you must update the initramfs file. This
is done by calling dracut -f (the option
-f overwrites the existing initramfs file). To add a driver
for the new hardware, edit
/etc/dracut.conf.d/01-dist.conf and add the following
line. If the file does not exist, create it.
force_drivers+="DRIVER1"
Replace DRIVER1 with the module name of the
driver. If you need to add more than one driver, list them space-separated
(DRIVER1
DRIVER2).
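As a hedged sketch of the steps above (the module names virtio_blk and virtio_net are purely illustrative, and the file is written to a temporary directory so the commands are safe to run anywhere), the drop-in file can be created like this:

```shell
# Create the dracut drop-in containing a force_drivers line.
# NOTE: the module names are example values; on a real system the file
# is /etc/dracut.conf.d/01-dist.conf and you run "dracut -f" afterwards.
conf_dir=$(mktemp -d)
echo 'force_drivers+="virtio_blk virtio_net"' > "$conf_dir/01-dist.conf"
cat "$conf_dir/01-dist.conf"
```

On a real system, follow up with dracut -f to regenerate the initramfs so the listed drivers are included.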
initramfs or init
The boot loader loads initramfs or
init in the same way as the kernel. It is not
necessary to re-install GRUB 2 after updating
initramfs or init,
because GRUB 2 searches the directory for the right file when booting.
If you change the values of kernel variables via the
sysctl interface by editing related files
(/etc/sysctl.conf or
/etc/sysctl.d/*.conf), the change will be lost on the
next system reboot. Even if you load the values with sysctl
--system at runtime, the changes are not saved into the initramfs
file. You need to update it by calling dracut
-f (the option -f overwrites the existing
initramfs file).
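A minimal sketch of that workflow, assuming the illustrative parameter vm.swappiness and file name (a temporary directory stands in for /etc/sysctl.d/ so the commands run without root):

```shell
# Write a sysctl drop-in; on a real system this file would live in
# /etc/sysctl.d/, for example /etc/sysctl.d/99-swappiness.conf.
d=$(mktemp -d)
echo 'vm.swappiness = 10' > "$d/99-swappiness.conf"
cat "$d/99-swappiness.conf"
# Afterwards you would run:
#   sysctl --system   # apply the value at runtime
#   dracut -f         # regenerate the initramfs so the value survives reboot
```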
initramfs #
The main purpose of init on
initramfs is to prepare the mounting of and access
to the real root file system. Depending on your system configuration,
init on initramfs is
responsible for the following tasks.
Depending on your hardware configuration, special drivers may be needed to access the hardware components of your computer (the most important component being your hard disk). To access the final root file system, the kernel needs to load the proper file system drivers.
For each loaded module, the kernel generates device events.
udev handles these events and
generates the required special block files on a RAM file system in
/dev. Without those special files, the file system
and other devices would not be accessible.
If you configured your system to hold the root file system under RAID or
LVM, init on initramfs
sets up LVM or RAID to enable access to the root file system later.
To change your /usr or
swap partitions directly without the help of
YaST, further actions are needed. If you forget these steps, your
system will start in emergency mode. To avoid starting in emergency mode,
perform the following steps:
Edit the corresponding entry in /etc/fstab and
replace your previous partitions with the logical volume.
Execute the following commands:
root # mount -a
root # swapon -a
Regenerate your initial RAM disk (initramfs) with
mkinitrd or dracut.
For z Systems, additionally run grub2-install.
Find more information about RAID and LVM in Chapter 5, Advanced Disk Setup.
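As an illustration of the fstab step in the procedure above, entries pointing at logical volumes might look like the following (the volume group name "system" is an assumption for illustration):

```shell
# Print example /etc/fstab entries for /usr and swap on LVM.
cat <<'EOF'
/dev/system/usr   /usr  ext4  defaults  0 2
/dev/system/swap  swap  swap  defaults  0 0
EOF
```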
If you configured your system to use a network-mounted root file system
(mounted via NFS), init on
initramfs must make sure that the proper network
drivers are loaded and that they are set up to allow access to the root
file system.
If the file system resides on a network block device like iSCSI or SAN,
the connection to the storage server is also set up by
init on initramfs.
openSUSE Leap supports booting from a secondary iSCSI target if the
primary target is not available.
When init on initramfs is
called during the initial boot as part of the installation process, its
tasks differ from those mentioned above:
When starting the installation process, your machine loads an
installation kernel and a special init
containing the YaST installer. The YaST installer is running in a RAM
file system and needs to have information about the location of the
installation medium to access it for installing the operating system.
As mentioned in Section 9.2, “initramfs”, the boot process
starts with a minimum set of drivers that can be used with most hardware
configurations. init starts an initial hardware
scanning process that determines the set of drivers suitable for your
hardware configuration. These drivers are used to generate a custom
initramfs that is needed to boot the system. If
the modules are not needed for boot but for coldplug, the modules can be
loaded with systemd; for more information, see
Section 10.6.4, “Loading Kernel Modules”.
When the hardware is properly recognized, the appropriate drivers are
loaded. The udev program creates
the special device files and init starts the
installation system with the YaST installer.
Finally, init starts YaST, which starts
package installation and system configuration.
systemd Daemon #
The program systemd is the process with process ID 1. It is responsible for
initializing the system in the required way. systemd is started directly by
the kernel and resists signal 9, which normally terminates processes.
All other programs are either started directly by systemd or by one of its
child processes.
Starting with openSUSE Leap 12, systemd is a replacement for the popular
System V init daemon. systemd is fully compatible with System V init (by
supporting init scripts). One of the main advantages of systemd is that it
considerably speeds up boot time by aggressively parallelizing service starts.
Furthermore, systemd only starts a service when it is really needed. Daemons
are not started unconditionally at boot time, but rather when being required
for the first time. systemd also supports Kernel Control Groups (cgroups),
snapshotting and restoring the system state and more. See
http://www.freedesktop.org/wiki/Software/systemd/ for
details.
This section will go into detail about the concept behind systemd.
systemd is a system and session manager for Linux, compatible with System V and LSB init scripts. The main features are:
provides aggressive parallelization capabilities
uses socket and D-Bus activation for starting services
offers on-demand starting of daemons
keeps track of processes using Linux cgroups
supports snapshotting and restoring of the system state
maintains mount and automount points
implements an elaborate transactional dependency-based service control logic
A unit configuration file contains information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd, a temporary system state snapshot, a resource management slice or a group of externally created processes. “Unit file” is a generic term used by systemd for the following:
Service. Information about a process (for example running a daemon); file ends with .service
Targets. Used for grouping units and as synchronization points during start-up; file ends with .target
Sockets.
Information about an IPC or network socket or a file system FIFO, for
socket-based activation (like
inetd); file ends with .socket
Path. Used to trigger other units (for example running a service when files change); file ends with .path
Timer. Information about a timer controlled and supervised by systemd, for timer-based activation; file ends with .timer
Mount point. Usually auto-generated by the fstab generator; file ends with .mount
Automount point. Information about a file system automount point; file ends with .automount
Swap. Information about a swap device or file for memory paging; file ends with .swap
Device. Information about a device unit as exposed in the sysfs/udev(7) device tree; file ends with .device
Scope / Slice. A concept for hierarchically managing resources of a group of processes; file ends with .scope/.slice
For more information about systemd.unit see http://www.freedesktop.org/software/systemd/man/systemd.unit.html
The System V init system uses several commands to handle services—the
init scripts, insserv, telinit and
others. systemd makes it easier to manage services, since there is only one
command to memorize for the majority of service-handling tasks:
systemctl. It uses the “command plus
subcommand” notation like git or
zypper:
systemctl GENERAL OPTIONS SUBCOMMAND SUBCOMMAND OPTIONS
See man 1 systemctl for a complete manual.
If the output goes to a terminal (and not to a pipe or a file, for example)
systemd commands send long output to a pager by default. Use the
--no-pager option to turn off paging mode.
systemd also supports bash-completion, allowing you to enter the first
letters of a subcommand and then press →| to
automatically complete it. This feature is only available in the
bash shell and requires the installation of the
package bash-completion.
Subcommands for managing services are the same as for managing a service
with System V init (start, stop,
...). The general syntax for service management commands is as follows:
systemctl reload|restart|start|status|stop|... MY_SERVICE(S)
rcMY_SERVICE(S) reload|restart|start|status|stop|...
systemd allows you to manage several services in one go. Instead of executing init scripts one after the other as with System V init, execute a command like the following:
tux > sudo systemctl start MY_1ST_SERVICE MY_2ND_SERVICE
To list all services available on the system:
tux > sudo systemctl list-unit-files --type=service
The following table lists the most important service management commands for systemd and System V init:
|
Task |
systemd Command |
System V init Command |
|---|---|---|
|
Starting. |
start |
start |
|
Stopping. |
stop |
stop |
|
Restarting. Shuts down services and starts them afterward. If a service is not yet running it will be started. |
restart |
restart |
|
Restarting conditionally. Restarts services if they are currently running. Does nothing for services that are not running. |
try-restart |
try-restart |
|
Reloading.
Tells services to reload their configuration files without
interrupting operation. Use case: Tell Apache to reload a modified
configuration file. |
reload |
reload |
|
Reloading or restarting. Reloads services if reloading is supported, otherwise restarts them. If a service is not yet running it will be started. |
reload-or-restart |
n/a |
|
Reloading or restarting conditionally. Reloads services if reloading is supported, otherwise restarts them if currently running. Does nothing for services that are not running. |
reload-or-try-restart |
n/a |
|
Getting detailed status information.
Lists information about the status of services. The status subcommand also displays the most recent log messages. |
status |
status |
|
Getting short status information. Shows whether services are active or not. |
is-active |
status |
The service management commands mentioned in the previous section let you manipulate services for the current session. systemd also lets you permanently enable or disable services, so they are automatically started when requested or are always unavailable. You can either do this by using YaST, or on the command line.
The following table lists enabling and disabling commands for systemd and System V init:
When enabling a service on the command line, it is not started
automatically. It is scheduled to be started with the next system
start-up or runlevel/target change. To immediately start a service after
having enabled it, explicitly run systemctl start
MY_SERVICE or rc
MY_SERVICE start.
|
Task |
systemd Command |
System V init Command |
|---|---|---|
|
Enabling. |
enable |
insserv, chkconfig on |
|
Disabling. |
disable |
insserv -r, chkconfig off |
|
Checking. Shows whether a service is enabled or not. |
is-enabled |
chkconfig |
|
Re-enabling. Similar to restarting a service, this command first disables and then enables a service. Useful to re-enable a service with its defaults. |
reenable |
n/a |
|
Masking. After “disabling” a service, it can still be started manually. To completely disable a service, you need to mask it. Use with care. |
mask |
n/a |
|
Unmasking. A service that has been masked can only be used again after it has been unmasked. |
unmask |
n/a |
The entire process of starting the system and shutting it down is maintained by systemd. From this point of view, the kernel can be considered a background process to maintain all other processes and adjust CPU time and hardware access according to requests from other programs.
With System V init the system was booted into a so-called
“Runlevel”. A runlevel defines how the system is started and
what services are available in the running system. Runlevels are numbered;
the most commonly known ones are 0 (shutting down the
system), 3 (multiuser with network) and
5 (multiuser with network and display manager).
systemd introduces a new concept by using so-called “target
units”. However, it remains fully compatible with the runlevel
concept. Target units are named rather than numbered and serve specific
purposes. For example, the targets local-fs.target
and swap.target mount local file systems and swap
spaces.
The target graphical.target provides a multiuser
system with network and display manager capabilities and is equivalent to
runlevel 5. Complex targets, such as
graphical.target, act as “meta”
targets by combining a subset of other targets. Since systemd makes it easy
to create custom targets by combining existing targets, it offers great
flexibility.
The following list shows the most important systemd target units. For a
full list refer to man 7 systemd.special.
default.target
The target that is booted by default. Not a “real” target,
but rather a symbolic link to another target like
graphical.target. Can be permanently changed via
YaST (see Section 10.4, “Managing Services with YaST”). To change it for
a session, use the kernel parameter
systemd.unit=MY_TARGET.target
at the boot prompt.
emergency.target
Starts an emergency shell on the console. Only use it at the boot prompt
as systemd.unit=emergency.target.
graphical.target
Starts a system with network, multiuser support and a display manager.
halt.target
Shuts down the system.
mail-transfer-agent.target
Starts all services necessary for sending and receiving mails.
multi-user.target
Starts a multiuser system with network.
reboot.target
Reboots the system.
rescue.target
Starts a single-user system without network.
To remain compatible with the System V init runlevel system, systemd
provides special targets named
runlevelX.target that map to the
corresponding runlevels numbered X.
If you want to know the current target, use the command: systemctl
get-default
systemd Target Units #|
System V runlevel |
systemd target |
Purpose |
|---|---|---|
|
0 |
runlevel0.target, halt.target, poweroff.target |
System shutdown |
|
1, S |
runlevel1.target, rescue.target |
Single-user mode |
|
2 |
runlevel2.target, multi-user.target |
Local multiuser without remote network |
|
3 |
runlevel3.target, multi-user.target |
Full multiuser with network |
|
4 |
runlevel4.target |
Unused/User-defined |
|
5 |
runlevel5.target, graphical.target |
Full multiuser with network and display manager |
|
6 |
runlevel6.target, reboot.target |
System reboot |
/etc/inittab
The runlevels in a System V init system are configured in
/etc/inittab. systemd does not
use this configuration. Refer to
Section 10.5.3, “Creating Custom Targets” for instructions on how
to create your own bootable target.
Use the following commands to operate with target units:
|
Task |
systemd Command |
System V init Command |
|---|---|---|
|
Change the current target/runlevel |
systemctl isolate MY_TARGET.target |
telinit X |
|
Change to the default target/runlevel |
systemctl default |
n/a |
|
Get the current target/runlevel |
systemctl list-units --type=target
With systemd there is usually more than one active target. The command lists all currently active targets. |
who -r
or
runlevel |
|
Persistently change the default runlevel |
Use the Services Manager or run the following command:
ln -sf /usr/lib/systemd/system/MY_TARGET.target /etc/systemd/system/default.target |
Use the Services Manager or change the line
id: X:initdefault:
in /etc/inittab |
|
Change the default runlevel for the current boot process |
Enter the following option at the boot prompt
systemd.unit=MY_TARGET.target |
Enter the desired runlevel number at the boot prompt. |
|
Show a target's/runlevel's dependencies |
systemctl show -p "Requires" MY_TARGET.target
systemctl show -p "Wants" MY_TARGET.target
“Requires” lists the hard dependencies (the ones that must be resolved), whereas “Wants” lists the soft dependencies (the ones that get resolved if possible). |
n/a |
systemd offers the means to analyze the system start-up process. You can
review the list of all services and their status (rather than having to
parse /var/log/). systemd also allows you to scan the
start-up procedure to find out how much time each service start-up
consumes.
To review the complete list of services that have been started since
booting the system, enter the command systemctl. It
lists all active services as shown below (shortened). To get more
information on a specific service, use systemctl status
MY_SERVICE.
root # systemctl
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
[...]
iscsi.service loaded active exited Login and scanning of iSC+
kmod-static-nodes.service loaded active exited Create list of required s+
libvirtd.service loaded active running Virtualization daemon
nscd.service loaded active running Name Service Cache Daemon
chronyd.service loaded active running NTP Server Daemon
polkit.service loaded active running Authorization Manager
postfix.service loaded active running Postfix Mail Transport Ag+
rc-local.service loaded active exited /etc/init.d/boot.local Co+
rsyslog.service loaded active running System Logging Service
[...]
LOAD = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB = The low-level unit activation state, values depend on unit type.
161 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
To restrict the output to services that failed to start, use the
--failed option:
root # systemctl --failed
UNIT LOAD ACTIVE SUB JOB DESCRIPTION
apache2.service loaded failed failed apache
NetworkManager.service loaded failed failed Network Manager
plymouth-start.service loaded failed failed Show Plymouth Boot Screen
[...]
To debug system start-up time, systemd offers the
systemd-analyze command. It shows the total start-up
time, a list of services ordered by start-up time and can also generate an
SVG graphic showing the time services took to start in relation to the
other services.
root # systemd-analyze
Startup finished in 2666ms (kernel) + 21961ms (userspace) = 24628ms
root # systemd-analyze blame
6472ms systemd-modules-load.service
5833ms remount-rootfs.service
4597ms network.service
4254ms systemd-vconsole-setup.service
4096ms postfix.service
2998ms xdm.service
2483ms localnet.service
2470ms SuSEfirewall2_init.service
2189ms avahi-daemon.service
2120ms systemd-logind.service
1080ms chronyd.service
[...]
75ms fbset.service
72ms purge-kernels.service
47ms dev-vda1.swap
38ms bluez-coldplug.service
35ms splash_early.service
root # systemd-analyze plot > jupiter.example.com-startup.svg
The above-mentioned commands let you review the services that started and
the time it took to start them. If you need to know more details, you can
tell systemd to verbosely log the complete start-up procedure by
entering the following parameters at the boot prompt:
systemd.log_level=debug systemd.log_target=kmsg
Now systemd writes its log messages into the kernel ring buffer. View
that buffer with dmesg:
tux > dmesg -T | less
systemd is compatible with System V, allowing you to still use existing
System V init scripts. However, there is at least one known issue where a
System V init script does not work with systemd out of the box: starting a
service as a different user via su or
sudo in init scripts will result in a failure of the
script, producing an “Access denied” error.
When changing the user with su or
sudo, a PAM session is started. This session will be
terminated after the init script is finished. As a consequence, the service
that has been started by the init script will also be terminated. To work
around this error, proceed as follows:
Create a service file wrapper with the same name as the init script plus
the file name extension .service:
[Unit]
Description=DESCRIPTION
After=network.target

[Service]
User=USER
Type=forking
PIDFile=PATH TO PID FILE
ExecStart=PATH TO INIT SCRIPT start
ExecStop=PATH TO INIT SCRIPT stop
ExecStopPost=/usr/bin/rm -f PATH TO PID FILE

[Install]
WantedBy=multi-user.target
Replace all values written in UPPERCASE LETTERS with appropriate values.
Start the daemon with systemctl start
APPLICATION.
Basic service management can also be done with the YaST Services Manager module. It supports starting, stopping, enabling and disabling services. It also lets you show a service's status and change the default target. Start the YaST module with › › .
To change the target the system boots into, choose a target from the drop-down box. The most often used targets are (starting a graphical login screen) and (starting the system in command line mode).
Select a service from the table. The column shows whether it is currently running () or not (). Toggle its status by choosing .
Starting or stopping a service changes its status for the currently running session. To change its status throughout a reboot, you need to enable or disable it.
Select a service from the table. The column shows whether it is currently or . Toggle its status by choosing .
By enabling or disabling a service you configure whether it is started during booting () or not (). This setting will not affect the current session. To change its status in the current session, you need to start or stop it.
To view the status message of a service, select it from the list and
choose . The output you will see is
identical to the one generated by the command
systemctl -l status
MY_SERVICE.
Faulty runlevel settings may make your system unusable. Before applying your changes, make absolutely sure that you know their consequences.
systemd #
The following sections contain some examples for
systemd customization.
Always do systemd customization in /etc/systemd/,
never in /usr/lib/systemd/.
Otherwise your changes will be overwritten by the next update of systemd.
The systemd unit files are located in
/usr/lib/systemd/system. If you want to customize
them, proceed as follows:
Copy the files you want to modify from
/usr/lib/systemd/system to
/etc/systemd/system. Keep the file names identical
to the original ones.
Modify the copies in /etc/systemd/system according
to your needs.
For an overview of your configuration changes, use the
systemd-delta command. It can compare and identify
configuration files that override other configuration files. For details,
refer to the systemd-delta man page.
The modified files in /etc/systemd will take
precedence over the original files in
/usr/lib/systemd/system, provided that their file name
is the same.
xinetd Services to systemd #
Since the release of openSUSE Leap 15, the xinetd
infrastructure has been removed. This section outlines how to convert
existing custom xinetd service files to systemd
sockets.
For each xinetd service file, you need at least
two systemd unit files: the socket file (*.socket)
and an associated service file (*.service). The
socket file tells systemd which socket to create, and the service file
tells what executable to start.
Consider the following example xinetd service
file:
root # cat /etc/xinetd.d/example
service example
{
socket_type = stream
protocol = tcp
port = 10085
wait = no
user = user
group = users
groups = yes
server = /usr/libexec/example/exampled
server_args = -auth=bsdtcp exampledump
disable = no
}
To convert it to systemd, you need the following two matching files:
root # cat /usr/lib/systemd/system/example.socket
[Socket]
ListenStream=0.0.0.0:10085
Accept=false
[Install]
WantedBy=sockets.target
root # cat /usr/lib/systemd/system/example.service
[Unit]
Description=example
[Service]
ExecStart=/usr/libexec/example/exampled -auth=bsdtcp exampledump
User=user
Group=users
StandardInput=socket
For a complete list of the systemd 'socket' and 'service' file options,
refer to systemd.socket and systemd.service manual pages (man 5
systemd.socket, man 5 systemd.service).
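The mapping from xinetd options to socket file directives is largely mechanical. As a small sketch, the port line of the example xinetd file above can be turned into the ListenStream= directive (the awk field separator handles the "key = value" syntax):

```shell
# Extract the port from an xinetd-style "port = ..." line and build
# the corresponding ListenStream= directive for the .socket file.
port=$(printf 'port = 10085\n' | awk -F'= *' '/port/ {print $2}')
echo "ListenStream=0.0.0.0:$port"   # prints ListenStream=0.0.0.0:10085
```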
If you only want to add a few lines to a configuration file or modify a small part of it, you can use so-called “drop-in” files. Drop-in files let you extend the configuration of unit files without having to edit or override the unit files themselves.
For example, to change one value for the FOOBAR
service located in
/usr/lib/systemd/system/FOOBAR.service,
proceed as follows:
Create a directory called
/etc/systemd/system/FOOBAR.service.d/.
Note the .d suffix. Apart from the suffix, the directory must be
named like the service that you want to patch with the drop-in file.
In that directory, create a file
WHATEVERMODIFICATION.conf.
Make sure it only contains the line with the value that you want to modify.
Save your changes to the file. It will be used as an extension of the original file.
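For instance, a drop-in that only sets Restart= for the hypothetical FOOBAR service could be created like this (a temporary directory stands in for /etc/systemd/system so the sketch runs without root):

```shell
# Create FOOBAR.service.d/restart.conf containing a single override.
root=$(mktemp -d)
mkdir -p "$root/FOOBAR.service.d"
cat > "$root/FOOBAR.service.d/restart.conf" <<'EOF'
[Service]
Restart=always
EOF
cat "$root/FOOBAR.service.d/restart.conf"
```

After placing the real file under /etc/systemd/system/FOOBAR.service.d/, run systemctl daemon-reload so systemd picks up the change.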
On System V init SUSE systems, runlevel 4 is unused to allow
administrators to create their own runlevel configuration. systemd allows
you to create any number of custom targets. It is suggested to start by
adapting an existing target such as
graphical.target.
Copy the configuration file
/usr/lib/systemd/system/graphical.target to
/etc/systemd/system/MY_TARGET.target
and adjust it according to your needs.
The configuration file copied in the previous step already covers the
required (“hard”) dependencies for the target. To also cover
the wanted (“soft”) dependencies, create a directory
/etc/systemd/system/MY_TARGET.target.wants.
For each wanted service, create a symbolic link from
/usr/lib/systemd/system into
/etc/systemd/system/MY_TARGET.target.wants.
Once you have finished setting up the target, reload the systemd configuration to make the new target available:
tux > sudo systemctl daemon-reload
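The symbolic-link step of the procedure above can be sketched as follows (a temporary directory stands in for /etc/systemd/system, and sshd.service is an assumed wanted service):

```shell
# Wire a wanted service into the custom target's .wants directory.
root=$(mktemp -d)
mkdir -p "$root/my.target.wants"
ln -s /usr/lib/systemd/system/sshd.service "$root/my.target.wants/"
ls "$root/my.target.wants"   # prints sshd.service
```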
The following sections cover advanced topics for system administrators. For even more advanced systemd documentation, refer to Lennart Pöttering's series about systemd for administrators at http://0pointer.de/blog/projects.
systemd supports cleaning temporary directories regularly. The
configuration from the previous system version is automatically migrated
and active. tmpfiles.d—which is responsible for
managing temporary files—reads its configuration from
/etc/tmpfiles.d/*.conf ,
/run/tmpfiles.d/*.conf, and
/usr/lib/tmpfiles.d/*.conf files. Configuration placed
in /etc/tmpfiles.d/*.conf overrides related
configurations from the other two directories
(/usr/lib/tmpfiles.d/*.conf is where packages store
their configuration files).
The configuration format is one line per path containing action and path, and optionally mode, ownership, age and argument fields, depending on the action. The following example unlinks the X11 lock files:
Type Path                Mode UID GID Age Argument
r    /tmp/.X[0-9]*-lock
To check the status of the tmpfiles timer:
tux > sudo systemctl status systemd-tmpfiles-clean.timer
systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
   Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static)
   Active: active (waiting) since Tue 2014-09-09 15:30:36 CEST; 1 weeks 6 days ago
     Docs: man:tmpfiles.d(5)
           man:systemd-tmpfiles(8)

Sep 09 15:30:36 jupiter systemd[1]: Starting Daily Cleanup of Temporary Directories.
Sep 09 15:30:36 jupiter systemd[1]: Started Daily Cleanup of Temporary Directories.
For more information on temporary files handling, see man 5
tmpfiles.d.
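As further hedged examples of the one-line-per-path format (the paths, modes and age are illustrative values, not shipped defaults):

```shell
# Print two example tmpfiles.d rules:
#   d - create the directory if it is missing
#   D - create it and clean its contents after the given age
cat <<'EOF'
d /run/example  0755 root root -
D /tmp/example  1777 root root 10d
EOF
```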
Section 10.6.8, “Debugging Services” explains how
to view log messages for a given service. However, displaying log messages
is not restricted to service logs. You can also access and query the
complete log messages written by systemd—the so-called
“Journal”. Use the command
journalctl to display the complete log messages
starting with the oldest entries. Refer to man 1
journalctl for options such as applying filters or
changing the output format.
You can save the current state of systemd to a named snapshot and later
revert to it with the isolate subcommand. This is useful
when testing services or custom targets, because it allows you to return to
a defined state at any time. A snapshot is only available in the current
session and will automatically be deleted on reboot. A snapshot name must
end in .snapshot.
tux > sudo systemctl snapshot MY_SNAPSHOT.snapshot
tux > sudo systemctl delete MY_SNAPSHOT.snapshot
tux > sudo systemctl show MY_SNAPSHOT.snapshot
tux > sudo systemctl isolate MY_SNAPSHOT.snapshot
With systemd, kernel modules can automatically be loaded at boot time via
a configuration file in /etc/modules-load.d. The file
should be named MODULE.conf and have the
following content:
# load module MODULE at boot time
MODULE
In case a package installs a configuration file for loading a kernel
module, the file gets installed to
/usr/lib/modules-load.d. If two configuration files
with the same name exist, the one in
/etc/modules-load.d takes precedence.
For more information, see the modules-load.d(5)
man page.
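A sketch of such a drop-in, using the hypothetical module name example_mod and a temporary directory in place of /etc/modules-load.d:

```shell
# Create a modules-load.d style file that loads one module at boot.
d=$(mktemp -d)
printf '# load example_mod at boot time\nexample_mod\n' > "$d/example_mod.conf"
cat "$d/example_mod.conf"
```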
With System V init, actions that needed to be performed before starting a
service had to be specified in
/etc/init.d/before.local. This procedure is no longer
supported with systemd. If you need to perform actions before starting services, do the following:
Create a drop-in file in /etc/modules-load.d
directory (see man modules-load.d for the syntax)
Create a drop-in file in /etc/tmpfiles.d (see
man tmpfiles.d for the syntax)
Create a system service file, for example
/etc/systemd/system/before.service, from the
following template:
[Unit]
Before=NAME OF THE SERVICE YOU WANT THIS SERVICE TO BE STARTED BEFORE

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=YOUR_COMMAND
# beware, executable is run directly, not through a shell, check the man pages
# systemd.service and systemd.unit for full syntax

[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target
When the service file is created, you should run the following commands
(as root):
tux > sudo systemctl daemon-reload
tux > sudo systemctl enable before
Every time you modify the service file, you need to run:
tux > sudo systemctl daemon-reload
On a traditional System V init system it is not always possible to clearly assign a process to the service that spawned it. Some services, such as Apache, spawn a lot of third-party processes (for example CGI or Java processes), which themselves spawn more processes. This makes a clear assignment difficult or even impossible. Additionally, a service may not terminate correctly, leaving some children alive.
systemd solves this problem by placing each service into its own cgroup. cgroups are a kernel feature that allows aggregating processes and all their children into hierarchical organized groups. systemd names each cgroup after its service. Since a non-privileged process is not allowed to “leave” its cgroup, this provides an effective way to label all processes spawned by a service with the name of the service.
To list all processes belonging to a service, use the command
systemd-cgls. The result will look like the following
(shortened) example:
root # systemd-cgls --no-pager
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
├─user.slice
│ └─user-1000.slice
│ ├─session-102.scope
│ │ ├─12426 gdm-session-worker [pam/gdm-password]
│ │ ├─15831 gdm-session-worker [pam/gdm-password]
│ │ ├─15839 gdm-session-worker [pam/gdm-password]
│ │ ├─15858 /usr/lib/gnome-terminal-server
[...]
└─system.slice
├─systemd-hostnamed.service
│ └─17616 /usr/lib/systemd/systemd-hostnamed
├─cron.service
│ └─1689 /usr/sbin/cron -n
├─postfix.service
│ ├─ 1676 /usr/lib/postfix/master -w
│ ├─ 1679 qmgr -l -t fifo -u
│ └─15590 pickup -l -t fifo -u
├─sshd.service
│ └─1436 /usr/sbin/sshd -D
[...]
See Chapter 9, Kernel Control Groups for more information about cgroups.
As explained in Section 10.6.6, “Kernel Control Groups (cgroups)”, it is not always possible to assign a process to its parent service process in a System V init system. This makes it difficult to terminate a service and all of its children. Child processes that have not been terminated will remain as zombie processes.
systemd's concept of confining each service into a cgroup makes it possible
to clearly identify all child processes of a service and therefore allows
you to send a signal to each of these processes. Use systemctl
kill to send signals to services. For a list of available signals
refer to man 7 signals.
SIGTERM to a Service
SIGTERM is the default signal that is sent.
tux > sudo systemctl kill MY_SERVICE
Use the -s option to specify the signal that should be
sent.
tux > sudo systemctl kill -s SIGNAL MY_SERVICE
By default the kill command sends the signal to
all processes of the specified cgroup. You can restrict
it to the control or the main process.
The latter is for example useful to force a service to reload its
configuration by sending SIGHUP:
tux > sudo systemctl kill -s SIGHUP --kill-who=main MY_SERVICE
The D-Bus service is the message bus for communication between systemd
clients and the systemd manager that is running as pid 1. Even though
dbus is a stand-alone daemon, it
is an integral part of the init infrastructure.
Terminating dbus or
restarting it in the running system is similar to an attempt to terminate
or restart pid 1. It will break systemd client/server communication and
make most systemd functions unusable.
Therefore, terminating or restarting
dbus is neither recommended
nor supported.
By default, systemd is not overly verbose. If a service was started
successfully, no output will be produced. In case of a failure, a short
error message will be displayed. However, systemctl
status provides means to debug start-up and operation of a
service.
systemd comes with its own logging mechanism (“The Journal”)
that logs system messages. This allows you to display the service messages
together with status messages. The status command works
similar to tail and can also display the log messages in
different formats, making it a powerful debugging tool.
Whenever a service fails to start, use systemctl status
MY_SERVICE to get a detailed error
message:
root # systemctl start apache2
Job failed. See system journal and 'systemctl status' for details.
root # systemctl status apache2
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Mon, 04 Jun 2012 16:52:26 +0200; 29s ago
  Process: 3088 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
   CGroup: name=systemd:/system/apache2.service

Jun 04 16:52:26 g144 start_apache2[3088]: httpd2-prefork: Syntax error on line 205 of /etc/apache2/httpd.conf: Syntax error on li...alHost>
The default behavior of the status subcommand is to
display the last ten messages a service issued. To change the number of
messages to show, use the
--lines=N parameter:
tux > sudo systemctl status chronyd
tux > sudo systemctl --lines=20 status chronyd
To display a “live stream” of service messages, use the
--follow option, which works like
tail -f:
tux > sudo systemctl --follow status chronyd
The --output=MODE parameter
allows you to change the output format of service messages. The most
important modes available are:
short
The default format. Shows the log messages with a human readable time stamp.
verbose
Full output with all fields.
cat
Terse output without time stamps.
For more information on systemd refer to the following online resources:
Lennart Pöttering, one of the systemd authors, has written a series of blog entries (13 at the time of writing this chapter). Find them at http://0pointer.de/blog/projects.
journalctl: Query the systemd Journal #
When systemd replaced traditional init scripts in openSUSE Leap
(see Chapter 10, The systemd Daemon), it introduced its own logging system
called journal. There is no need to run a
syslog based service anymore, as all system events
are written in the journal.
The journal itself is a system service managed by systemd. Its full name is
systemd-journald.service. It collects and stores logging
data by maintaining structured indexed journals based on logging information
received from the kernel, user processes, standard input, and system service errors. The systemd-journald service is on
by default:
tux > sudo systemctl status systemd-journald
systemd-journald.service - Journal Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static)
   Active: active (running) since Mon 2014-05-26 08:36:59 EDT; 3 days ago
     Docs: man:systemd-journald.service(8)
           man:journald.conf(5)
 Main PID: 413 (systemd-journal)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-journald.service
           └─413 /usr/lib/systemd/systemd-journald
[...]
The journal stores log data in /run/log/journal/ by
default. Because the /run/ directory is volatile by
nature, log data is lost at reboot. To make the log data persistent, the
directory /var/log/journal/ with correct ownership and
permissions must exist, where the systemd-journald service can store its
data. systemd will create the directory for you—and switch to
persistent logging—if you do the following:
As root, open /etc/systemd/journald.conf for
editing.
root # vi /etc/systemd/journald.conf
Uncomment the line containing Storage= and change it to
[...]
[Journal]
Storage=persistent
#Compress=yes
[...]
Save the file and restart systemd-journald:
root # systemctl restart systemd-journald
journalctl Useful Switches #
This section introduces several common useful options to enhance the default
journalctl behavior. All switches are described in the
journalctl manual page, man 1
journalctl.
To show all journal messages related to a specific executable, specify the full path to the executable:
tux > sudo journalctl /usr/lib/systemd/systemd
-f
Shows only the most recent journal messages, and prints new log entries as they are added to the journal.
-e
Prints the messages and jumps to the end of the journal, so that the latest entries are visible within the pager.
-r
Prints the messages of the journal in reverse order, so that the latest entries are listed first.
-k
Shows only kernel messages. This is equivalent to the field match
_TRANSPORT=kernel (see
Section 11.3.3, “Filtering Based on Fields”).
-u
Shows only messages for the specified systemd unit. This is equivalent
to the field match
_SYSTEMD_UNIT=UNIT (see
Section 11.3.3, “Filtering Based on Fields”).
tux > sudo journalctl -u apache2
[...]
Jun 03 10:07:11 pinkiepie systemd[1]: Starting The Apache Webserver...
Jun 03 10:07:12 pinkiepie systemd[1]: Started The Apache Webserver.
When called without switches, journalctl shows the full
content of the journal, the oldest entries listed first. The output can be
filtered by specific switches and fields.
journalctl can filter messages based on a specific
system boot. To list all available boots, run
tux > sudo journalctl --list-boots
-1 097ed2cd99124a2391d2cffab1b566f0 Mon 2014-05-26 08:36:56 EDT—Fri 2014-05-30 05:33:44 EDT
 0 156019a44a774a0bb0148a92df4af81b Fri 2014-05-30 05:34:09 EDT—Fri 2014-05-30 06:15:01 EDT
The first column lists the boot offset: 0 for the
current boot, -1 for the previous one,
-2 for the one prior to that, etc. The second column
contains the boot ID followed by the limiting time stamps of the specific
boot.
Show all messages from the current boot:
tux > sudo journalctl -b
If you need to see journal messages from the previous boot, add an offset parameter. The following example outputs the previous boot messages:
tux > sudo journalctl -b -1
Another way is to list boot messages based on the boot ID. For this purpose, use the _BOOT_ID field:
tux > sudo journalctl _BOOT_ID=156019a44a774a0bb0148a92df4af81b
You can filter the output of journalctl by specifying
the starting and/or ending date. The date specification should be of the
format "2014-06-30 9:17:16". If the time part is omitted, midnight is
assumed. If seconds are omitted, ":00" is assumed. If the date part is
omitted, the current day is assumed. Instead of numeric expression, you can
specify the keywords "yesterday", "today", or "tomorrow". They refer to
midnight of the day before the current day, of the current day, or of the
day after the current day. If you specify "now", it refers to the current
time. You can also specify relative times prefixed with
- or +, referring to times before or
after the current time.
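The calendar arithmetic behind these keywords can be checked with GNU date, which accepts comparable relative expressions (this only illustrates the date semantics; it does not invoke journalctl):

```shell
# "today" and "yesterday" resolve to midnight of the respective day.
date -d "today 00:00" "+%H:%M:%S"       # prints 00:00:00
date -d "yesterday 00:00" "+%Y-%m-%d"   # the date of the previous day
date -d "-2 hours" "+%H:%M"             # a relative time before now
```

Note that this relies on GNU coreutils date; minimal environments with a different date implementation may not accept these expressions.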
Show only new messages since now, and update the output continuously:
tux > sudo journalctl --since "now" -f
Show all messages since last midnight till 3:20am:
tux > sudo journalctl --since "today" --until "3:20"
You can filter the output of the journal by specific fields. The syntax of
a field to be matched is FIELD_NAME=MATCHED_VALUE, such
as _SYSTEMD_UNIT=httpd.service. You can specify multiple
matches in a single query to filter the output messages even more. See
man 7 systemd.journal-fields for a list of default
fields.
Show messages produced by a specific process ID:
tux > sudo journalctl _PID=1039
Show messages belonging to a specific user ID:
# journalctl _UID=1000
Show messages from the kernel ring buffer (the same as
dmesg produces):
tux > sudo journalctl _TRANSPORT=kernel
Show messages from the service's standard or error output:
tux > sudo journalctl _TRANSPORT=stdout
Show messages produced by a specified service only:
tux > sudo journalctl _SYSTEMD_UNIT=avahi-daemon.service
If two different fields are specified, only entries that match both expressions at the same time are shown:
tux > sudo journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1488
If two matches refer to the same field, all entries matching either expression are shown:
tux > sudo journalctl _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=dbus.service
You can use the '+' separator to combine two expressions in a logical 'OR'. The following example shows all messages from the Avahi service process with the process ID 1480 together with all messages from the D-Bus service:
tux > sudo journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1480 + _SYSTEMD_UNIT=dbus.service
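The AND/OR rules can be mimicked on a plain text file with grep — a toy model of the matching logic, not of journalctl's implementation; the file name and records below are made up:

```shell
# Fake journal records, one entry per line (made-up data):
cat > /tmp/fake-journal.txt <<'EOF'
_SYSTEMD_UNIT=avahi-daemon.service _PID=1488 MESSAGE=a
_SYSTEMD_UNIT=avahi-daemon.service _PID=1500 MESSAGE=b
_SYSTEMD_UNIT=dbus.service _PID=1488 MESSAGE=c
EOF

# Different fields are ANDed: both must match the same entry
# (only MESSAGE=a survives).
grep '_SYSTEMD_UNIT=avahi-daemon.service' /tmp/fake-journal.txt | grep '_PID=1488'

# The same field repeated is ORed: entries from either unit match
# (all three lines survive).
grep -E '_SYSTEMD_UNIT=(avahi-daemon|dbus)\.service' /tmp/fake-journal.txt
```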
Investigating systemd Errors #
This section introduces a simple example to illustrate how to find and fix
the error reported by systemd during apache2 start-up.
Try to start the apache2 service:
# systemctl start apache2
Job for apache2.service failed. See 'systemctl status apache2' and 'journalctl -xn' for details.
Let us see what the service's status says:
tux > sudo systemctl status apache2
apache2.service - The Apache Webserver
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Tue 2014-06-03 11:08:13 CEST; 7min ago
  Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND \
           -k graceful-stop (code=exited, status=1/FAILURE)
The ID of the process causing the failure is 11026.
Show the verbose version of messages related to process ID 11026:
tux > sudo journalctl -o verbose _PID=11026
[...]
MESSAGE=AH00526: Syntax error on line 6 of /etc/apache2/default-server.conf:
[...]
MESSAGE=Invalid command 'DocumenttRoot', perhaps misspelled or defined by a module
[...]
Fix the typo inside /etc/apache2/default-server.conf,
start the apache2 service, and print its status:
tux > sudo systemctl start apache2 && systemctl status apache2
apache2.service - The Apache Webserver
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: active (running) since Tue 2014-06-03 11:26:24 CEST; 4ms ago
  Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND -k graceful-stop (code=exited, status=1/FAILURE)
 Main PID: 11263 (httpd2-prefork)
   Status: "Processing requests..."
   CGroup: /system.slice/apache2.service
           ├─11263 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
           ├─11280 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
           ├─11281 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
           ├─11282 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
           ├─11283 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
           └─11285 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
The behavior of the systemd-journald service can be adjusted by modifying
/etc/systemd/journald.conf. This section introduces
only basic option settings. For a complete file description, see
man 5 journald.conf. Note that you need to restart the
journal for the changes to take effect:
tux > sudo systemctl restart systemd-journald
If the journal log data is saved to a persistent location (see
Section 11.1, “Making the Journal Persistent”), it uses up to 10% of the file
system /var/log/journal resides on. For example,
if /var/log/journal is located on a 30 GB
/var partition, the journal may use up to 3 GB of the
disk space. To change this limit, change (and uncomment) the
SystemMaxUse option:
SystemMaxUse=50M
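The 10% rule from the example above works out as follows (the numbers are taken from the example, not measured):

```shell
# 10% of a 30 GB /var partition, expressed in MiB:
var_size_mib=$((30 * 1024))
journal_cap_mib=$((var_size_mib / 10))
echo "$journal_cap_mib MiB"   # 3072 MiB, i.e. about 3 GB
```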
Forwarding the Journal to /dev/ttyX #
You can forward the journal to a terminal device to inform you about system
messages on a preferred terminal screen, for example
/dev/tty12. Change the following journald options to
ForwardToConsole=yes
TTYPath=/dev/tty12
Journald is backward compatible with traditional syslog implementations
such as rsyslog. To forward journal messages to syslog, make sure that the following prerequisites are met:
rsyslog is installed.
tux > sudo rpm -q rsyslog
rsyslog-7.4.8-2.16.x86_64
rsyslog service is enabled.
tux > sudo systemctl is-enabled rsyslog
enabled
Forwarding to syslog is enabled in
/etc/systemd/journald.conf.
ForwardToSyslog=yes
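Taken together, a journald.conf combining the options discussed in this section could look like this (an illustrative excerpt, not a recommended configuration):

```ini
# /etc/systemd/journald.conf (excerpt) -- illustrative values only
[Journal]
Storage=persistent
SystemMaxUse=50M
ForwardToConsole=yes
TTYPath=/dev/tty12
ForwardToSyslog=yes
```

Remember to restart systemd-journald after any change to this file.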
Using YaST to Filter the systemd Journal #
For an easy way of filtering the systemd journal (without having to deal
with the journalctl syntax), you can use the YaST journal module. After
installing it with sudo zypper in yast2-journal, start it
from YaST by selecting › . Alternatively, start it
from the command line by entering sudo yast2 journal.
The module displays the log entries in a table. The search box on top allows
you to search for entries that contain certain characters, similar to using
grep. To filter the entries by date and time, unit, file,
or priority, click and set the respective
options.
This chapter describes how to configure GRUB 2, the boot loader used in openSUSE® Leap. It is the successor to the traditional GRUB boot loader—now called “GRUB Legacy”. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 9, Introduction to the Booting Process. For details on Secure Boot support for UEFI machines, see Chapter 14, UEFI (Unified Extensible Firmware Interface).
The configuration is stored in different files.
More file systems are supported (for example, Btrfs).
Can directly read files stored on LVM or RAID devices.
The user interface can be translated and altered with themes.
Includes a mechanism for loading modules to support additional features, such as file systems, etc.
Automatically searches for and generates boot entries for other kernels and operating systems, such as Windows.
Includes a minimal Bash-like console.
The configuration of GRUB 2 is based on the following files:
/boot/grub2/grub.cfg
This file contains the configuration of the GRUB 2 menu items. It
replaces menu.lst used in GRUB Legacy.
grub.cfg is automatically generated by the
grub2-mkconfig
command, and should not be edited.
/boot/grub2/custom.cfg
This optional file is directly sourced by grub.cfg
at boot time and can be used to add custom items to the boot menu.
Starting with openSUSE Leap 42.2, these entries are also
parsed when using grub2-once.
/etc/default/grub
This file controls the user settings of GRUB 2 and usually includes additional environmental settings such as backgrounds and themes.
/etc/grub.d/
The scripts in this directory are read during execution of the
grub2-mkconfig
command. Their instructions are
integrated into the main configuration file
/boot/grub2/grub.cfg.
/etc/sysconfig/bootloader
This configuration file is used when configuring the boot loader with
YaST and every time a new kernel is installed. It is evaluated by the
perl-bootloader which modifies the boot loader configuration file (for
example /boot/grub2/grub.cfg for GRUB 2)
accordingly. /etc/sysconfig/bootloader is not a
GRUB 2-specific configuration file—the values are applied to any
boot loader installed on openSUSE Leap.
/boot/grub2/x86_64-efi and other architecture-specific directories
These configuration files contain architecture-specific options.
GRUB 2 can be controlled in various ways. Boot entries from an existing
configuration can be selected from the graphical menu (splash screen). The
configuration is loaded from the file
/boot/grub2/grub.cfg which is compiled from other
configuration files (see below). All GRUB 2 configuration files are
considered system files, and you need root privileges to edit them.
After having manually edited GRUB 2 configuration files, you need to run
grub2-mkconfig
to activate the changes. However, this
is not necessary when changing the configuration with YaST, since it will
automatically run grub2-mkconfig
.
/boot/grub2/grub.cfg #
The graphical splash screen with the boot menu is based on the GRUB 2
configuration file /boot/grub2/grub.cfg, which
contains information about all partitions or operating systems that can be
booted by the menu.
Every time the system is booted, GRUB 2 loads the menu file directly from
the file system. For this reason, GRUB 2 does not need to be re-installed
after changes to the configuration file. grub.cfg is
automatically rebuilt with kernel installations or removals.
grub.cfg is compiled by the
grub2-mkconfig
from the file
/etc/default/grub and scripts found in the
/etc/grub.d/ directory. Therefore you should never
edit the file manually. Instead, edit the related source files or use the
YaST module to modify the configuration as
described in Section 12.3, “Configuring the Boot Loader with YaST”.
/etc/default/grub #
More general options of GRUB 2 belong here, such as the time the menu is displayed, or the default OS to boot. To list all available options, see the output of the following command:
tux > grep "export GRUB_DEFAULT" -A50 /usr/sbin/grub2-mkconfig | grep GRUB_
In addition to already defined variables, the user may introduce their own
variables, and use them later in the scripts found in the
/etc/grub.d directory.
After having edited /etc/default/grub, run
grub2-mkconfig
to update the main configuration file.
All options set in this file are general options that affect all boot entries. Specific options for Xen kernels or the Xen hypervisor can be set via the GRUB_*_XEN_* configuration options. See below for details.
GRUB_DEFAULT
Sets the boot menu entry that is booted by default. Its value can be a numeric value, the complete name of a menu entry, or “saved”.
GRUB_DEFAULT=2 boots the third (counted from zero)
boot menu entry.
GRUB_DEFAULT="2>0" boots the first submenu entry
of the third top-level menu entry.
GRUB_DEFAULT="Example boot menu entry" boots the menu
entry with the title “Example boot menu entry”.
GRUB_DEFAULT=saved boots the entry specified by the
grub2-once or grub2-set-default
commands. While grub2-reboot sets the
default boot entry for the next reboot only,
grub2-set-default sets the default boot entry until
changed. grub2-editenv list lists the next boot entry.
GRUB_HIDDEN_TIMEOUT
Waits the specified number of seconds for the user to press a key.
During the period no menu is shown unless the user presses a key. If no
key is pressed during the time specified, the control is passed to
GRUB_TIMEOUT.
GRUB_HIDDEN_TIMEOUT=0 first checks whether
Shift is pressed and shows the boot menu if yes,
otherwise immediately boots the default menu entry. This is the default
when only one bootable OS is identified by GRUB 2.
GRUB_HIDDEN_TIMEOUT_QUIET
If false is specified, a countdown timer is displayed
on a blank screen when the GRUB_HIDDEN_TIMEOUT
feature is active.
GRUB_TIMEOUT
Time period in seconds the boot menu is displayed before automatically
booting the default boot entry. If you press a key, the timeout is
cancelled and GRUB 2 waits for you to make the selection manually.
GRUB_TIMEOUT=-1 will cause the menu to be displayed
until you select the boot entry manually.
GRUB_CMDLINE_LINUX
Entries on this line are added at the end of the boot entries for normal and recovery mode. Use it to add kernel parameters to the boot entry.
GRUB_CMDLINE_LINUX_DEFAULT
Same as GRUB_CMDLINE_LINUX but the entries are
appended in the normal mode only.
GRUB_CMDLINE_LINUX_RECOVERY
Same as GRUB_CMDLINE_LINUX but the entries are
appended in the recovery mode only.
GRUB_CMDLINE_LINUX_XEN_REPLACE
This entry will completely replace the
GRUB_CMDLINE_LINUX parameters for all Xen boot
entries.
GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT
Same as GRUB_CMDLINE_LINUX_XEN_REPLACE but it will
only replace parameters of GRUB_CMDLINE_LINUX_DEFAULT.
GRUB_CMDLINE_XEN
This entry specifies the kernel parameters for the Xen guest kernel
only—the operation principle is the same as for
GRUB_CMDLINE_LINUX.
GRUB_CMDLINE_XEN_DEFAULT
Same as GRUB_CMDLINE_XEN—the operation
principle is the same as for
GRUB_CMDLINE_LINUX_DEFAULT.
GRUB_TERMINAL
Enables and specifies an input/output terminal device. Can be
console (PC BIOS and EFI consoles),
serial (serial terminal),
ofconsole (Open Firmware console), or the default
gfxterm (graphics-mode output). It is also possible
to enable more than one device by quoting the required options, for
example GRUB_TERMINAL="console serial".
GRUB_GFXMODE
The resolution used for the gfxterm graphical
terminal. Note that you can only use modes supported by your graphics
card (VBE). The default is ‘auto’, which tries to select a preferred
resolution. You can display the screen resolutions available to GRUB 2
by typing videoinfo in the GRUB 2 command line. The
command line is accessed by typing C when the GRUB 2
boot menu screen is displayed.
You can also specify a color depth by appending it to the resolution
setting, for example GRUB_GFXMODE=1280x1024x24.
GRUB_BACKGROUND
Set a background image for the gfxterm graphical
terminal. The image must be a file readable by GRUB 2 at boot time, and
it must end with the .png, .tga,
.jpg, or .jpeg suffix. If
necessary, the image will be scaled to fit the screen.
GRUB_DISABLE_OS_PROBER
If this option is set to true, automatic searching
for other operating systems is disabled. Only the kernel images in
/boot/ and the options from your own scripts in
/etc/grub.d/ are detected.
SUSE_BTRFS_SNAPSHOT_BOOTING
If this option is set to true, GRUB 2 can boot
directly into Snapper snapshots. For more information, see
Section 3.3, “System Rollback by Booting from Snapshots”.
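Since /etc/default/grub consists of plain shell variable assignments, it can be sourced like any shell fragment. The following sketch writes an illustrative excerpt to a temporary file and inspects one value, the same way grub2-mkconfig reads the real file (all values are examples, not recommendations):

```shell
# An illustrative /etc/default/grub excerpt (demo file, not the real one):
cat > /tmp/grub-demo <<'EOF'
GRUB_DEFAULT=saved
GRUB_TIMEOUT=8
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent quiet"
GRUB_TERMINAL="gfxterm"
GRUB_GFXMODE=1280x1024x24
GRUB_DISABLE_OS_PROBER=false
EOF
# grub2-mkconfig sources this file; we can do the same to inspect a value:
. /tmp/grub-demo
echo "$GRUB_GFXMODE"   # prints 1280x1024x24
```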
For a complete list of options, see the GNU GRUB manual. For a complete list of possible parameters, see http://en.opensuse.org/Linuxrc.
/etc/grub.d #
The scripts in this directory are read during execution of the
grub2-mkconfig
command, and their instructions are
incorporated into /boot/grub2/grub.cfg. The order of
menu items in grub.cfg is determined by the order in
which the files in this directory are run. Files with a leading numeral are
executed first, beginning with the lowest number.
00_header is run before 10_linux,
which would run before 40_custom. If files with
alphabetic names are present, they are executed after the numerically-named
files. Only executable files generate output to
grub.cfg during execution of
grub2-mkconfig. By default all files in the
/etc/grub.d directory are executable.
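The ordering rule is plain lexical sorting, which can be reproduced in a scratch directory (a demo directory, not the real /etc/grub.d):

```shell
# grub2-mkconfig runs the scripts in lexical order: numeric prefixes first,
# lowest number first, alphabetic names after them.
mkdir -p /tmp/grub.d-demo
touch /tmp/grub.d-demo/00_header /tmp/grub.d-demo/10_linux \
      /tmp/grub.d-demo/40_custom /tmp/grub.d-demo/README
ls /tmp/grub.d-demo    # 00_header 10_linux 40_custom README
```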
grub.cfg
Because /boot/grub2/grub.cfg is recompiled each time
grub2-mkconfig is run, any custom content is lost.
If you want to insert your lines directly into
/boot/grub2/grub.cfg without losing them after
grub2-mkconfig is run, insert it between
### BEGIN /etc/grub.d/90_persistent ###
and
### END /etc/grub.d/90_persistent ###
lines. The 90_persistent script ensures that such
content will be preserved.
A list of the most important scripts follows:
00_header
Sets environmental variables such as system file locations, display
settings, themes, and previously saved entries. It also imports
preferences stored in the /etc/default/grub.
Normally you do not need to make changes to this file.
10_linux
Identifies Linux kernels on the root device and creates relevant menu entries. This includes the associated recovery mode option if enabled. Only the latest kernel is displayed on the main menu page, with additional kernels included in a submenu.
30_os-prober
This script uses os-prober to search for Linux and
other operating systems and places the results in the GRUB 2 menu. There
are sections to identify specific other operating systems, such as
Windows or macOS.
40_custom
This file provides a simple way to include custom boot entries into
grub.cfg. Make sure that you do not change the
exec tail -n +3 $0 part at the beginning.
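The exec tail -n +3 $0 header is what makes 40_custom work: the script replaces itself with tail, which prints the script's own contents from line 3 onward, so everything below the header is copied verbatim into grub.cfg. A toy reproduction (file path and menu entry are made up):

```shell
# Recreate the 40_custom mechanism with a throwaway script:
cat > /tmp/40_custom_demo <<'EOF'
#!/bin/sh
exec tail -n +3 "$0"
menuentry "My custom entry" {
    linux /boot/vmlinuz
}
EOF
chmod +x /tmp/40_custom_demo
# Running the script prints only the lines after the two-line header:
/tmp/40_custom_demo
```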
The processing sequence is set by the preceding numbers with the lowest number being executed first. If scripts are preceded by the same number the alphabetical order of the complete name decides the order.
/boot/grub2/custom.cfg
If you create /boot/grub2/custom.cfg and fill it with
a custom content, it will be automatically included into
/boot/grub2/grub.cfg at boot time.
In GRUB Legacy, the device.map configuration file was
used to derive Linux device names from BIOS drive numbers. The mapping
between BIOS drives and Linux devices cannot always be guessed correctly.
For example, GRUB Legacy would get a wrong order if the boot sequence of
IDE and SCSI drives is exchanged in the BIOS configuration.
GRUB 2 avoids this problem by using device ID strings (UUIDs) or file
system labels when generating grub.cfg. GRUB 2
utilities create a temporary device map on the fly, which is usually
sufficient, particularly in the case of single-disk systems.
However, if you need to override the GRUB 2's automatic device mapping
mechanism, create your custom mapping file
/boot/grub2/device.map. The following example changes
the mapping to make DISK 3 the boot disk. Note that
GRUB 2 partition numbers start with 1 and not with
0 as in GRUB Legacy.
(hd1) /dev/disk-by-id/DISK3 ID
(hd2) /dev/disk-by-id/DISK1 ID
(hd3) /dev/disk-by-id/DISK2 ID
Even before the operating system is booted, GRUB 2 enables access to file systems. Users without root permissions can access files in your Linux system to which they have no access after the system is booted. To block this kind of access or to prevent users from booting certain menu entries, set a boot password.
If set, the boot password is required on every boot, which means the system does not boot automatically.
Proceed as follows to set a boot password. Alternatively use YaST ( ).
Encrypt the password using grub2-mkpasswd-pbkdf2:
tux > sudo grub2-mkpasswd-pbkdf2
Password: ****
Reenter password: ****
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
Paste the resulting string into the file
/etc/grub.d/40_custom together with the set
superusers command.
set superusers="root"
password_pbkdf2 root grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
Run grub2-mkconfig
to import the changes into the
main configuration file.
After you reboot, you will be prompted for a user name and a password when
trying to boot a menu entry. Enter root and the password
you typed during the grub2-mkpasswd-pbkdf2 command. If
the credentials are correct, the system will boot the selected boot entry.
For more information, see https://www.gnu.org/software/grub/manual/grub.html#Security.
The easiest way to configure general options of the boot loader in your openSUSE Leap system is to use the YaST module. In the , select › . The module shows the current boot loader configuration of your system and allows you to make changes.
Use the tab to view and change settings related to type, location and advanced loader settings. You can choose whether to use GRUB 2 in standard or EFI mode.
If you have an EFI system, you must install GRUB2-EFI; otherwise, your system will no longer be bootable.
To reinstall the boot loader, make sure to change a setting in YaST and then change it back. For example, to reinstall GRUB2-EFI, select first and then immediately switch back to .
Otherwise, the boot loader may only be partially reinstalled.
To use a boot loader other than the ones listed, select . Read the documentation of your boot loader carefully before choosing this option.
The default location of the boot loader depends on the partition setup and
is either the Master Boot Record (MBR) or the boot sector of the
/ partition. To modify the location of the boot loader,
follow these steps:
Select the tab and then choose one of the following options for :
This installs the boot loader in the MBR of the disk containing the
directory /boot. Usually this will be the disk
mounted to /, but if /boot is
mounted to a separate partition on a different disk, the MBR of that
disk will be used.
This installs the boot loader in the boot sector of the
/ partition.
Use this option to specify the location of the boot loader manually.
Click to apply your changes.
The tab includes the following additional options:
Activates the partition that contains the
/boot directory. Use this option on systems with old BIOS and/or
legacy operating systems because they may fail to boot from a non-active
partition. It is safe to leave this option active.
If the MBR contains a custom, non-GRUB code, this option replaces it with a generic, operating system independent code. If you deactivate this option, the system may become unbootable.
Starts TrustedGRUB2 which supports trusted computing functionality (Trusted Platform Module (TPM)). For more information refer to https://github.com/Sirrix-AG/TrustedGRUB2.
If your computer has more than one hard disk, you can specify the boot sequence of the disks. The first disk in the list is where GRUB 2 will be installed in the case of booting from MBR. It is the disk where openSUSE Leap is installed by default. The rest of the list is a hint for GRUB 2's device mapper (see Section 12.2.4, “Mapping between BIOS Drives and Linux Devices”).
The default value is usually valid for almost all deployments. If you change the boot order of disks wrongly, the system may become unbootable on the next reboot, for example, if the first disk in the list is not part of the BIOS boot order and the other disks in the list have empty MBRs.
Open the tab.
Click .
If more than one disk is listed, select a disk and click or to reorder the displayed disks.
Click two times to save the changes.
Advanced boot options can be configured via the tab.
Change the value of by typing in a new value and clicking the appropriate arrow key with your mouse.
When selected, the boot loader searches for other systems like Windows or other Linux installations.
Hides the boot menu and boots the default entry.
Select the desired entry from the “Default Boot Section” list. Note that the “>” sign in the boot entry name delimits the boot section and its subsection.
Protects the boot loader and the system with an additional password. For more information, see Section 12.2.6, “Setting a Boot Password”.
The option specifies the default screen resolution during the boot process.
The optional kernel parameters are added at the end of the default parameters. For a list of all possible parameters, see http://en.opensuse.org/Linuxrc.
When checked, the boot menu appears on a graphical splash screen rather than in text mode. The resolution of the boot screen can then be set from the list, and a graphical theme definition file can be specified with the file chooser.
If your machine is controlled via a serial console, activate this option
and specify which COM port to use at which speed. See info
grub or
http://www.gnu.org/software/grub/manual/grub.html#Serial-terminal for details.
On 3215 and 3270 terminals there are some differences and limitations on how to move the cursor and how to issue editing commands within GRUB 2.
Interactivity is strongly limited. Typing often does not result in visual feedback. To see where the cursor is, type an underscore (_).
The 3270 terminal is much better at displaying and refreshing screens than the 3215 terminal.
“Traditional” cursor movement is not possible. Alt, Meta, Ctrl and the cursor keys do not work. To move the cursor, use the key combinations listed in Section 12.4.2, “Key Combinations”.
The caret (^) is used as a control character. To type a literal ^ followed by a letter, type ^, ^, LETTER.
The Enter key does not work, use ^–J instead.
Common Substitutes:
  ^–J    engage (“Enter”)
  ^–L    abort, return to previous “state”
  ^–I    tab completion (in edit and shell mode)

Keys Available in Menu Mode:
  ^–A    first entry
  ^–E    last entry
  ^–P    previous entry
  ^–N    next entry
  ^–G    previous page
  ^–C    next page
  ^–F    boot selected entry or enter submenu (same as ^–J)
  E      edit selected entry
  C      enter GRUB-Shell

Keys Available in Edit Mode:
  ^–P    previous line
  ^–N    next line
  ^–B    backward char
  ^–F    forward char
  ^–A    beginning of line
  ^–E    end of line
  ^–H    backspace
  ^–D    delete
  ^–K    kill line
  ^–Y    yank
  ^–O    open line
  ^–L    refresh screen
  ^–X    boot entry
  ^–C    enter GRUB-Shell

Keys Available in Command Line Mode:
  ^–P    previous command
  ^–N    next command from history
  ^–A    beginning of line
  ^–E    end of line
  ^–B    backward char
  ^–F    forward char
  ^–H    backspace
  ^–D    delete
  ^–K    kill line
  ^–U    discard line
  ^–Y    yank
grub2-mkconfig
Generates a new /boot/grub2/grub.cfg based on
/etc/default/grub and the scripts from
/etc/grub.d/.
grub2-mkconfig -o /boot/grub2/grub.cfg
Running grub2-mkconfig without any parameters prints
the configuration to STDOUT where it can be reviewed. Use
grub2-script-check
after
/boot/grub2/grub.cfg has been written to check its
syntax.
grub2-mkconfig Cannot Repair UEFI Secure Boot Tables
If you are using UEFI Secure Boot and your system no longer reaches GRUB 2 correctly, you may additionally need to reinstall Shim and regenerate the UEFI boot table. To do so, use:
root # shim-install --config-file=/boot/grub2/grub.cfg
grub2-mkrescue
Creates a bootable rescue image of your installed GRUB 2 configuration.
grub2-mkrescue -o save_path/name.iso iso
grub2-script-check
Checks the given file for syntax errors.
grub2-script-check /boot/grub2/grub.cfg
grub2-once
Set the default boot entry for the next boot only. To get the list of
available boot entries use the --list option.
grub2-once number_of_the_boot_entry
grub2-once Help
Call the program without any option to get a full list of all possible options.
Extensive information about GRUB 2 is available at http://www.gnu.org/software/grub/. Also refer to the
grub info page.
Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.
Linux and other Unix operating systems use the TCP/IP protocol. It is not a single network protocol, but a family of network protocols that offer various services. The protocols listed in Several Protocols in the TCP/IP Protocol Family are provided for exchanging data between two machines via TCP/IP. Networks combined by TCP/IP into a worldwide network are also called “the Internet.”
RFC stands for Request for Comments. RFCs are documents that describe various Internet protocols and implementation procedures for the operating system and its applications. The RFC documents describe the setup of Internet protocols. For more information about RFCs, see http://www.ietf.org/rfc.html.
Transmission Control Protocol: a connection-oriented, reliable protocol. The data to transmit is first sent by the application as a stream of data and converted into the appropriate format by the operating system. The data arrives at the respective application on the destination host in the same data stream format in which it was initially sent. TCP determines whether any data has been lost or jumbled during the transmission. TCP is implemented wherever the data sequence matters.
User Datagram Protocol: a connectionless, unreliable protocol. The data to transmit is sent in the form of packets generated by the application. The order in which the data arrives at the recipient is not guaranteed and data loss is possible. UDP is suitable for record-oriented applications. It features a smaller latency period than TCP.
Internet Control Message Protocol: This is not a protocol for the end user, but a special control protocol that issues error reports and can control the behavior of machines participating in TCP/IP data transfer. In addition, it provides a special echo mode that can be viewed using the program ping.
Internet Group Management Protocol: This protocol controls machine behavior when implementing IP multicast.
As shown in Figure 13.1, “Simplified Layer Model for TCP/IP”, data exchange takes place in different layers. The actual network layer is the insecure data transfer via IP (Internet protocol). On top of IP, TCP (transmission control protocol) guarantees, to a certain extent, security of the data transfer. The IP layer is supported by the underlying hardware-dependent protocol, such as Ethernet.
The diagram provides one or two examples for each layer. The layers are ordered according to abstraction levels. The lowest layer is very close to the hardware. The uppermost layer, however, is almost a complete abstraction from the hardware. Every layer has its own special function. The special functions of each layer are mostly implicit in their description. The data link and physical layers represent the physical network used, such as Ethernet.
Almost all hardware protocols work on a packet-oriented basis. The data to transmit is collected into packets (it cannot be sent all at once). The maximum size of a TCP/IP packet is approximately 64 KB. Packets are normally much smaller, as the network hardware can be a limiting factor. The maximum size of a data packet on an Ethernet is about 1500 bytes. The size of a TCP/IP packet is limited to this amount when the data is sent over an Ethernet. If more data is transferred, more data packets need to be sent by the operating system.
For the layers to serve their designated functions, additional information regarding each layer must be saved in the data packet. This takes place in the header of the packet. Every layer attaches a small block of data, called the protocol header, to the front of each emerging packet. A sample TCP/IP data packet traveling over an Ethernet cable is illustrated in Figure 13.2, “TCP/IP Ethernet Packet”. The checksum is located at the end of the packet, not at the beginning. This simplifies things for the network hardware.
When an application sends data over the network, the data passes through each layer, all implemented in the Linux kernel except the physical layer. Each layer is responsible for preparing the data so it can be passed to the next layer. The lowest layer is ultimately responsible for sending the data. The entire procedure is reversed when data is received. Like the layers of an onion, in each layer the protocol headers are removed from the transported data. Finally, the transport layer is responsible for making the data available for use by the applications at the destination. In this manner, one layer only communicates with the layer directly above or below it. For applications, it is irrelevant whether data is transmitted via a 100 Mbit/s FDDI network or via a 56-Kbit/s modem line. Likewise, it is irrelevant for the data line which kind of data is transmitted, as long as packets are in the correct format.
The discussion in this section is limited to IPv4 networks. For information about IPv6 protocol, the successor to IPv4, refer to Section 13.2, “IPv6—The Next Generation Internet”.
Every computer on the Internet has a unique 32-bit address. These 32 bits (or 4 bytes) are normally written as illustrated in the second row in Example 13.1, “Writing IP Addresses”.
IP Address (binary):  11000000 10101000 00000000 00010100
IP Address (decimal):      192.     168.       0.      20
In decimal form, the four bytes are written in the decimal number system, separated by periods. The IP address is assigned to a host or a network interface. It can be used only once throughout the world. There are exceptions to this rule, but these are not relevant to the following passages.
The points in IP addresses indicate the hierarchical system. Until the 1990s, IP addresses were strictly categorized in classes. However, this system proved too inflexible and was discontinued. Now, classless routing (CIDR, classless interdomain routing) is used.
Netmasks are used to define the address range of a subnet. If two hosts are in the same subnet, they can reach each other directly. If they are not in the same subnet, they need the address of a gateway that handles all the traffic for the subnet. To check if two IP addresses are in the same subnet, simply “AND” both addresses with the netmask. If the result is identical, both IP addresses are in the same local network. If there are differences, the remote IP address, and thus the remote interface, can only be reached over a gateway.
To understand how the netmask works, look at
Example 13.2, “Linking IP Addresses to the Netmask”. The netmask consists of 32 bits
that identify how much of an IP address belongs to the network. All those
bits that are 1 mark the corresponding bit in the IP
address as belonging to the network. All bits that are 0
mark bits inside the subnet. This means that the more bits are
1, the smaller the subnet is. Because the netmask always
consists of several successive 1 bits, it is also
possible to count the number of bits in the netmask. In
Example 13.2, “Linking IP Addresses to the Netmask” the first net with 24 bits could
also be written as 192.168.0.0/24.
IP address (192.168.0.20):  11000000 10101000 00000000 00010100
Netmask (255.255.255.0):    11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11000000 10101000 00000000 00000000
In the decimal system:           192.     168.       0.       0

IP address (213.95.15.200): 11010101 01011111 00001111 11001000
Netmask (255.255.255.0):    11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11010101 01011111 00001111 00000000
In the decimal system:           213.      95.      15.       0
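The AND operation shown above can be reproduced with a small shell sketch. The helper names below are illustrative, not part of any standard tool:

```shell
#!/bin/sh
# Sketch: decide whether two IPv4 addresses share a subnet by ANDing
# each of them with the netmask and comparing the results.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}
same_subnet() {
  mask=$(ip_to_int "$3")
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}
same_subnet 192.168.0.20 192.168.0.99 255.255.255.0 && echo "same subnet"
same_subnet 192.168.0.20 213.95.15.200 255.255.255.0 || echo "different subnets"
```

With a 255.255.255.0 netmask, only the first three octets have to match, so the first pair lands in the same subnet and the second does not.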
To give another example: all machines connected with the same Ethernet cable are usually located in the same subnet and are directly accessible. Even when the subnet is physically divided by switches or bridges, these hosts can still be reached directly.
IP addresses outside the local subnet can only be reached if a gateway is configured for the target network. In the most common case, there is only one gateway that handles all traffic that is external. However, it is also possible to configure several gateways for different subnets.
If a gateway has been configured, all external IP packets are sent to the appropriate gateway. This gateway then attempts to forward the packets in the same manner—from host to host—until it reaches the destination host or the packet's TTL (time to live) expires.
Base Network Address
This is the netmask AND any address in the network, as shown in Example 13.2, “Linking IP Addresses to the Netmask” under Result of the link. This address cannot be assigned to any hosts.
Broadcast Address
This could be paraphrased as: “Access all hosts in this subnet.” To generate this, the netmask is inverted in binary form and linked to the base network address with a logical OR. The above example therefore results in 192.168.0.255. This address cannot be assigned to any hosts.
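The invert-and-OR step can be checked with plain shell arithmetic. This sketch uses the 192.168.0.0 network and 255.255.255.0 netmask from the example:

```shell
#!/bin/sh
# Broadcast address = base network address OR (inverted netmask),
# computed here for 192.168.0.0 with netmask 255.255.255.0.
net=$((  (192 << 24) | (168 << 16) ))               # 192.168.0.0
mask=$(( (255 << 24) | (255 << 16) | (255 << 8) ))  # 255.255.255.0
bcast=$(( net | (~mask & 0xFFFFFFFF) ))             # OR with inverted mask
printf '%d.%d.%d.%d\n' \
  $(( (bcast >> 24) & 255 )) $(( (bcast >> 16) & 255 )) \
  $(( (bcast >> 8) & 255 ))  $(( bcast & 255 ))     # prints 192.168.0.255
```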
The address 127.0.0.1 is
assigned to the “loopback device” on each host. A
connection can be set up to your own machine with this address and with
all addresses from the complete
127.0.0.0/8 loopback network
as defined with IPv4. With IPv6 there is only one loopback address
(::1).
Because IP addresses must be unique all over the world, you cannot select random addresses. There are three address domains to use if you want to set up a private IP-based network. These cannot get any connection from the rest of the Internet, because they cannot be transmitted over the Internet. These address domains are specified in RFC 1597 and listed in Table 13.1, “Private IP Address Domains”.
| Network/Netmask | Domain |
|---|---|
| 10.0.0.0/255.0.0.0 | 10.x.x.x |
| 172.16.0.0/255.240.0.0 | 172.16.x.x–172.31.x.x |
| 192.168.0.0/255.255.0.0 | 192.168.x.x |
Due to the emergence of the World Wide Web (WWW), the Internet has experienced explosive growth, with an increasing number of computers communicating via TCP/IP in the past fifteen years. Since Tim Berners-Lee at CERN (http://public.web.cern.ch) invented the WWW in 1990, the number of Internet hosts has grown from a few thousand to about a hundred million.
As mentioned, an IPv4 address consists of only 32 bits. Also, quite a few IP addresses are lost—they cannot be used because of the way in which networks are organized. The number of addresses available in your subnet is two to the power of the number of bits, minus two. A subnet has, for example, 2, 6, or 14 addresses available. To connect 128 hosts to the Internet, for example, you need a subnet with 256 IP addresses, from which only 254 are usable, because two IP addresses are needed for the structure of the subnet itself: the broadcast and the base network address.
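The arithmetic can be verified directly in the shell; the prefix lengths below are chosen to match the numbers in the text:

```shell
#!/bin/sh
# Usable addresses in an IPv4 subnet: 2^(32 - prefix length) - 2,
# because the base network address and the broadcast address
# cannot be assigned to hosts.
for prefix in 30 29 28 24; do
  echo "/$prefix: $(( (1 << (32 - prefix)) - 2 )) usable addresses"
done
```

This prints 2, 6, 14, and 254 usable addresses, matching the examples above.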
Under the current IPv4 protocol, DHCP or NAT (network address translation) are the typical mechanisms used to circumvent the potential address shortage. Combined with the convention to keep private and public address spaces separate, these methods can certainly mitigate the shortage. The problem with them lies in their configuration, which is a chore to set up and a burden to maintain. To set up a host in an IPv4 network, you need several address items, such as the host's own IP address, the subnetmask, the gateway address and maybe a name server address. All these items need to be known and cannot be derived from somewhere else.
With IPv6, both the address shortage and the complicated configuration should be a thing of the past. The following sections tell more about the improvements and benefits brought by IPv6 and about the transition from the old protocol to the new one.
The most important and most visible improvement brought by the new protocol is the enormous expansion of the available address space. An IPv6 address is made up of 128 bit values instead of the traditional 32 bits. This provides for as many as several quadrillion IP addresses.
However, IPv6 addresses are not only different from their predecessors with regard to their length. They also have a different internal structure that may contain more specific information about the systems and the networks to which they belong. More details about this are found in Section 13.2.2, “Address Types and Structure”.
The following is a list of other advantages of the new protocol:
IPv6 makes the network “plug and play” capable, which means that a newly set up system integrates into the (local) network without any manual configuration. The new host uses its automatic configuration mechanism to derive its own address from the information made available by the neighboring routers, relying on a protocol called the neighbor discovery (ND) protocol. This method does not require any intervention on the administrator's part and there is no need to maintain a central server for address allocation—an additional advantage over IPv4, where automatic address allocation requires a DHCP server.
Nevertheless, if a router is connected to a switch, the router should
send periodic advertisements with flags telling the hosts of a network
how they should interact with each other. For more information, see
RFC 2462 and the radvd.conf(5) man page, and
RFC 3315.
IPv6 makes it possible to assign several addresses to one network interface at the same time. This allows users to access several networks easily, something that could be compared with the international roaming services offered by mobile phone companies. When you take your mobile phone abroad, the phone automatically logs in to a foreign service when it enters the corresponding area, so you can be reached under the same number everywhere and can place an outgoing call, as you would in your home area.
With IPv4, network security is an add-on function. IPv6 includes IPsec as one of its core features, allowing systems to communicate over a secure tunnel to avoid eavesdropping by outsiders on the Internet.
Realistically, it would be impossible to switch the entire Internet from IPv4 to IPv6 at one time. Therefore, it is crucial that both protocols can coexist not only on the Internet, but also on one system. This is ensured by compatible addresses (IPv4 addresses can easily be translated into IPv6 addresses) and by using several tunnels. See Section 13.2.3, “Coexistence of IPv4 and IPv6”. Also, systems can rely on a dual stack IP technique to support both protocols at the same time, meaning that they have two network stacks that are completely separate, such that there is no interference between the two protocol versions.
With IPv4, some services, such as SMB, need to broadcast their packets to all hosts in the local network. IPv6 allows a much more fine-grained approach by enabling servers to address hosts through multicasting, that is, by addressing several hosts as parts of a group. This is different from addressing all hosts through broadcasting or each host individually through unicasting. Which hosts are addressed as a group may depend on the concrete application. There are some predefined groups to address all name servers (the all name servers multicast group), for example, or all routers (the all routers multicast group).
As mentioned, the current IP protocol has two major limitations: there is an increasing shortage of IP addresses, and configuring the network and maintaining the routing tables is becoming a more complex and burdensome task. IPv6 solves the first problem by expanding the address space to 128 bits. The second one is mitigated by introducing a hierarchical address structure combined with sophisticated techniques to allocate network addresses, and multihoming (the ability to assign several addresses to one device, giving access to several networks).
When dealing with IPv6, it is useful to know about three different types of addresses:
Addresses of this type are associated with exactly one network interface. Packets with such an address are delivered to only one destination. Accordingly, unicast addresses are used to transfer packets to individual hosts on the local network or the Internet.
Addresses of this type relate to a group of network interfaces. Packets with such an address are delivered to all destinations that belong to the group. Multicast addresses are mainly used by certain network services to communicate with certain groups of hosts in a well-directed manner.
Addresses of this type are related to a group of interfaces. Packets with such an address are delivered to the member of the group that is closest to the sender, according to the principles of the underlying routing protocol. Anycast addresses are used to make it easier for hosts to find out about servers offering certain services in the given network area. All servers of the same type have the same anycast address. Whenever a host requests a service, it receives a reply from the server with the closest location, as determined by the routing protocol. If this server should fail for some reason, the protocol automatically selects the second closest server, then the third one, and so forth.
An IPv6 address is made up of eight four-digit fields, each representing 16
bits, written in hexadecimal notation. They are separated by colons
(:). Any leading zeros within a given field may be dropped, but zeros within the field or at its end may not. Another convention is that one or more consecutive fields consisting only of zeros may be collapsed into a double colon. However, only one such :: is
allowed per address. This kind of shorthand notation is shown in
Example 13.3, “Sample IPv6 Address”, where all three lines represent the
same address.
fe80 : 0000 : 0000 : 0000 : 0000 : 10 : 1000 : 1a4
fe80 :    0 :    0 :    0 :    0 : 10 : 1000 : 1a4
fe80 :                           : 10 : 1000 : 1a4
Each part of an IPv6 address has a defined function. The first bytes form
the prefix and specify the type of address. The center part is the network
portion of the address, but it may be unused. The end of the address forms
the host part. With IPv6, the netmask is defined by indicating the length
of the prefix after a slash at the end of the address. An address, as shown
in Example 13.4, “IPv6 Address Specifying the Prefix Length”, contains the information that
the first 64 bits form the network part of the address and the last 64 form
its host part. In other words, the 64 means that the
netmask is filled with 64 1-bit values from the left. As with IPv4, the IP
address is combined with AND with the values from the netmask to determine
whether the host is located in the same subnet or in another one.
fe80::10:1000:1a4/64
IPv6 knows about several predefined types of prefixes. Some are shown in Various IPv6 Prefixes.
00
IPv4 addresses and IPv4 over IPv6 compatibility addresses. These are used to maintain compatibility with IPv4. Their use still requires a router able to translate IPv6 packets into IPv4 packets. Several special addresses, such as the one for the loopback device, have this prefix as well.
2 or 3 as the first digit
Aggregatable global unicast addresses. As is the case with IPv4, an
interface can be assigned to form part of a certain subnet. Currently,
there are the following address spaces:
2001::/16 (production quality
address space) and 2002::/16
(6to4 address space).
fe80::/10
Link-local addresses. Addresses with this prefix should not be routed and should therefore only be reachable from within the same subnet.
fec0::/10
Site-local addresses. These may be routed, but only within the network
of the organization to which they belong. In effect, they are the IPv6
equivalent of the current private network address space, such as
10.x.x.x.
ff
These are multicast addresses.
A unicast address consists of three basic components:
The first part (which also contains one of the prefixes mentioned above) is used to route packets through the public Internet. It includes information about the company or institution that provides the Internet access.
The second part contains routing information about the subnet to which to deliver the packet.
The third part identifies the interface to which to deliver the packet.
This also allows for the MAC to form part of the address. Given that the MAC is a globally unique, fixed identifier coded into the device by the hardware maker, the configuration procedure is substantially simplified. In fact, the last 64 address bits are consolidated to form the EUI-64 token: 48 bits are taken from the MAC, and the remaining 16 bits contain special information about the token type. This also makes it possible to assign an EUI-64 token to interfaces that do not have a MAC, such as those based on PPP.
On top of this basic structure, IPv6 distinguishes between five different types of unicast addresses:
:: (unspecified) This address is used by the host as its source address when the interface is initialized for the first time (at which point, the address cannot yet be determined by other means).
::1 (loopback) The address of the loopback device.
The IPv6 address is formed by the IPv4 address and a prefix consisting of 96 zero bits. This type of compatibility address is used for tunneling (see Section 13.2.3, “Coexistence of IPv4 and IPv6”) to allow IPv4 and IPv6 hosts to communicate with others operating in a pure IPv4 environment.
This type of address specifies a pure IPv4 address in IPv6 notation.
There are two address types for local use:
This type of address can only be used in the local subnet. Packets
with a source or target address of this type should not be routed to
the Internet or other subnets. These addresses contain a special
prefix (fe80::/10) and the
interface ID of the network card, with the middle part consisting of
zero bytes. Addresses of this type are used during automatic
configuration to communicate with other hosts belonging to the same
subnet.
Packets with this type of address may be routed to other subnets, but
not to the wider Internet—they must remain inside the
organization's own network. Such addresses are used for intranets and
are an equivalent of the private address space defined by IPv4. They
contain a special prefix
(fec0::/10), the interface
ID, and a 16 bit field specifying the subnet ID. Again, the rest is
filled with zero bytes.
As a completely new feature introduced with IPv6, each network interface normally gets several IP addresses, with the advantage that several networks can be accessed through the same interface. One of these networks can be configured completely automatically using the MAC and a known prefix with the result that all hosts on the local network can be reached when IPv6 is enabled (using the link-local address). With the MAC forming part of it, any IP address used in the world is unique. The only variable parts of the address are those specifying the site topology and the public topology, depending on the actual network in which the host is currently operating.
For a host to go back and forth between different networks, it needs at least two addresses. One of them, the home address, not only contains the interface ID but also an identifier of the home network to which it normally belongs (and the corresponding prefix). The home address is a static address and, as such, it does not normally change. Still, all packets destined to the mobile host can be delivered to it, regardless of whether it operates in the home network or somewhere outside. This is made possible by the completely new features introduced with IPv6, such as stateless autoconfiguration and neighbor discovery. In addition to its home address, a mobile host gets one or more additional addresses that belong to the foreign networks where it is roaming. These are called care-of addresses. The home network has a facility that forwards any packets destined to the host when it is roaming outside. In an IPv6 environment, this task is performed by the home agent, which takes all packets destined to the home address and relays them through a tunnel. On the other hand, those packets destined to the care-of address are directly transferred to the mobile host without any special detours.
The migration of all hosts connected to the Internet from IPv4 to IPv6 is a gradual process. Both protocols will coexist for some time to come. The coexistence on one system is guaranteed where there is a dual stack implementation of both protocols. That still leaves the question of how an IPv6 enabled host should communicate with an IPv4 host and how IPv6 packets should be transported by the current networks, which are predominantly IPv4-based. The best solutions offer tunneling and compatibility addresses (see Section 13.2.2, “Address Types and Structure”).
IPv6 hosts that are more or less isolated in the (worldwide) IPv4 network can communicate through tunnels: IPv6 packets are encapsulated as IPv4 packets to move them across an IPv4 network. Such a connection between two IPv4 hosts is called a tunnel. To achieve this, packets must include the IPv6 destination address (or the corresponding prefix) and the IPv4 address of the remote host at the receiving end of the tunnel. A basic tunnel can be configured manually according to an agreement between the hosts' administrators. This is also called static tunneling.
However, the configuration and maintenance of static tunnels is often too labor-intensive to use them for daily communication needs. Therefore, IPv6 provides for three different methods of dynamic tunneling:
IPv6 packets are automatically encapsulated as IPv4 packets and sent over an IPv4 network capable of multicasting. IPv6 is tricked into seeing the whole network (Internet) as a huge local area network (LAN). This makes it possible to determine the receiving end of the IPv4 tunnel automatically. However, this method does not scale very well and is also hampered because IP multicasting is far from widespread on the Internet. Therefore, it only provides a solution for smaller corporate or institutional networks where multicasting can be enabled. The specifications for this method are laid down in RFC 2529.
With this method, IPv4 addresses are automatically generated from IPv6 addresses, enabling isolated IPv6 hosts to communicate over an IPv4 network. However, several problems have been reported regarding the communication between those isolated IPv6 hosts and the Internet. The method is described in RFC 3056.
This method relies on special servers that provide dedicated tunnels for IPv6 hosts. It is described in RFC 3053.
To configure IPv6, you normally do not need to make any changes on the
individual workstations. IPv6 is enabled by default. To disable or enable
IPv6 on an installed system, use the YaST module. On the tab,
check or uncheck the option as necessary.
To enable it temporarily until the next reboot, enter
modprobe -i ipv6 as
root. It is impossible to unload
the IPv6 module after it has been loaded.
Because of the autoconfiguration concept of IPv6, the network card is assigned an address in the link-local network. Normally, no routing table management takes place on a workstation. The network routers can be queried by the workstation, using the router advertisement protocol, for what prefix and gateways should be implemented. The radvd program can be used to set up an IPv6 router. This program informs the workstations which prefix to use for the IPv6 addresses and which routers. Alternatively, use zebra/quagga for automatic configuration of both addresses and routing.
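As a sketch, a minimal /etc/radvd.conf announcing one prefix might look as follows. The interface name eth0 and the documentation prefix 2001:db8:1::/64 are placeholders; substitute your own values:

```
interface eth0
{
    AdvSendAdvert on;          # send periodic router advertisements
    prefix 2001:db8:1::/64     # prefix hosts derive their addresses from
    {
    };
};
```

Hosts on the link combine this prefix with their interface ID to autoconfigure a global address.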
For information about how to set up various types of tunnels using the
/etc/sysconfig/network files, see the man page of
ifcfg-tunnel (man ifcfg-tunnel).
The above overview does not cover the topic of IPv6 comprehensively. For a more in-depth look at the new protocol, refer to the following online documentation and books:
The starting point for everything about IPv6.
All information needed to start your own IPv6 network.
The list of IPv6-enabled products.
Here, find the Linux IPv6-HOWTO and many links related to the topic.
The fundamental RFC about IPv6.
A book describing all the important aspects of the topic is IPv6 Essentials by Silvia Hagen (ISBN 0-596-00125-8).
DNS assists in assigning an IP address to one or more names and assigning a name to an IP address. In Linux, this conversion is usually carried out by a special type of software known as bind. The machine that takes care of this conversion is called a name server. The names make up a hierarchical system in which each name component is separated by a period. The name hierarchy is, however, independent of the IP address hierarchy described above.
Consider a complete name, such as
jupiter.example.com, written in the
format hostname.domain. A full
name, called a fully qualified domain name (FQDN),
consists of a host name and a domain name
(example.com). The latter
also includes the top level domain or TLD
(com).
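The components of such a name can be split with standard shell parameter expansion, shown here for the jupiter.example.com example from above:

```shell
#!/bin/sh
# Split a fully qualified domain name into its components.
fqdn=jupiter.example.com
host=${fqdn%%.*}     # everything before the first dot -> jupiter
domain=${fqdn#*.}    # everything after the first dot  -> example.com
tld=${fqdn##*.}      # everything after the last dot   -> com
echo "host=$host domain=$domain tld=$tld"
```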
TLD assignment has become quite confusing for historical reasons.
Traditionally, three-letter domain names are used in the USA. In the rest of
the world, the two-letter ISO national codes are the standard. In addition
to that, longer TLDs were introduced in 2000 that represent certain spheres
of activity (for example, .info,
.name,
.museum).
In the early days of the Internet (before 1990), the file
/etc/hosts was used to store the names of all the
machines represented over the Internet. This quickly proved to be
impractical in the face of the rapidly growing number of computers connected
to the Internet. For this reason, a decentralized database was developed to
store the host names in a widely distributed manner. This database, similar
to the name server, does not have the data pertaining to all hosts in the
Internet readily available, but can dispatch requests to other name servers.
The top of the hierarchy is occupied by root name servers. These root name servers manage the top level domains and are run by the Network Information Center (NIC). Each root name server knows about the name servers responsible for a given top level domain. Information about top level domain NICs is available at http://www.internic.net.
DNS can do more than resolve host names. The name server also knows which host is receiving e-mails for an entire domain—the mail exchanger (MX).
For your machine to resolve an IP address, it must know about at least one name server and its IP address. Easily specify such a name server using YaST. The configuration of name server access with openSUSE® Leap is described in Section 13.4.1.4, “Configuring Host Name and DNS”. Setting up your own name server is described in Chapter 19, The Domain Name System.
The protocol whois is closely related to DNS. With this program, you can quickly find out who is responsible for a given domain.
The .local top level domain is treated as link-local
domain by the resolver. DNS requests are sent as multicast DNS requests
instead of normal DNS requests. If you already use the
.local domain in your name server configuration, you
must switch this option off in /etc/host.conf. For
more information, see the host.conf manual page.
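For example, the multicast DNS behavior can be switched off with a single line in /etc/host.conf. The mdns keyword shown below is an assumption for illustration; check the host.conf manual page on your system for the exact syntax:

```
# /etc/host.conf
# Disable multicast DNS resolution for the .local domain
# (keyword assumed; see "man 5 host.conf" for the exact name)
mdns off
```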
If you want to switch off MDNS during installation, use
nomdns=1 as a boot parameter.
For more information on multicast DNS, see http://www.multicastdns.org.
There are many supported networking types on Linux. Most of them use different device names and the configuration files are spread over several locations in the file system. For a detailed overview of the aspects of manual network configuration, see Section 13.6, “Configuring a Network Connection Manually”.
All network interfaces with link up (with a network cable connected) are automatically configured. Additional hardware can be configured any time on the installed system. The following sections describe the network configuration for all types of network connections supported by openSUSE Leap.
To configure your Ethernet or Wi-Fi/Bluetooth card in YaST, select › . After starting the module, YaST displays the dialog with four tabs: , , and .
The tab allows you to set general networking options such as the network setup method, IPv6, and general DHCP options. For more information, see Section 13.4.1.1, “Configuring Global Networking Options”.
The tab contains information about installed network interfaces and configurations. Any properly detected network card is listed with its name. You can manually configure new cards, remove or change their configuration in this dialog. To manually configure a card that was not automatically detected, see Section 13.4.1.3, “Configuring an Undetected Network Card”. If you want to change the configuration of an already configured card, see Section 13.4.1.2, “Changing the Configuration of a Network Card”.
The tab allows you to set the host name of the machine and the name servers to be used. For more information, see Section 13.4.1.4, “Configuring Host Name and DNS”.
The tab is used for the configuration of routing. See Section 13.4.1.5, “Configuring Routing” for more information.
The tab of the YaST module allows you to set important global networking options, such as the use of NetworkManager, IPv6 and DHCP client options. These settings are applicable for all network interfaces.
In the choose the way network
connections are managed. If you want a NetworkManager desktop applet to manage
connections for all interfaces, choose .
NetworkManager is well suited for switching between multiple wired and wireless
networks. If you do not run a desktop environment, or if your computer is a
Xen server, virtual system, or provides network services such as DHCP or
DNS in your network, use the method. If
NetworkManager is used, nm-applet should be used to configure
network options and the ,
and tabs of the
module are disabled.
For more information on NetworkManager, see
Chapter 28, Using NetworkManager.
In the choose whether to use the IPv6 protocol. It is possible to use IPv6 together with IPv4. By default, IPv6 is enabled. However, in networks not using the IPv6 protocol, response times can be faster with IPv6 disabled. To disable IPv6, deactivate . If IPv6 is disabled, the kernel no longer loads the IPv6 module automatically. This setting is applied after a reboot.
In the configure options for the DHCP client. The must be different for each DHCP client on a single network. If left empty, it defaults to the hardware address of the network interface. However, if you are running several virtual machines using the same network interface and, therefore, the same hardware address, specify a unique free-form identifier here.
The specifies a string used for the
host name option field when the DHCP client sends messages to a DHCP
server. Some DHCP servers update name server zones (forward and reverse
records) according to this host name (Dynamic DNS). Also, some DHCP servers
require the option field to contain a specific
string in the DHCP messages from clients. Leave AUTO to
send the current host name (that is, the one defined in
/etc/HOSTNAME). Leave the option field empty to send
no host name.
If you do not want to change the default route according to the information from DHCP, deactivate .
To change the configuration of a network card, select a card from the list of the detected cards in › in YaST and click . The dialog appears in which to adjust the card configuration using the , and tabs.
You can set the IP address of the network card or the way its IP address is determined in the tab of the dialog. Both IPv4 and IPv6 addresses are supported. The network card can have (which is useful for bonding devices), a (IPv4 or IPv6) or a assigned via or or both.
If using , select whether to use (for IPv4), (for IPv6) or .
If possible, the first network card with link that is available during the installation is automatically configured to use automatic address setup via DHCP.
DHCP should also be used if you are using a DSL line but with no static IP assigned by the ISP (Internet Service Provider). If you decide to use DHCP, configure the details in in the tab of the dialog of the YaST network card configuration module. If you have a virtual host setup where different hosts communicate through the same interface, an is necessary to distinguish them.
DHCP is a good choice for client configuration but it is not ideal for server configuration. To set a static IP address, proceed as follows:
Select a card from the list of detected cards in the tab of the YaST network card configuration module and click .
In the tab, choose .
Enter the . Both IPv4 and IPv6 addresses
can be used. Enter the network mask in .
If the IPv6 address is used, use for
prefix length in format /64.
Optionally, you can enter a fully qualified
for this address, which will be written to the
/etc/hosts configuration file.
Click .
To activate the configuration, click .
During activation of a network interface, wicked
checks for a carrier and only applies the IP configuration when a link
has been detected. If you need to apply the configuration regardless of
the link status (for example, when you want to test a service listening to a
certain address), you can skip link detection by adding the variable
LINK_REQUIRED=no to the configuration file of the
interface in /etc/sysconfig/network/ifcfg.
Additionally, you can use the variable
LINK_READY_WAIT=5 to
specify the timeout for waiting for a link in seconds.
For more information about the ifcfg-* configuration
files, refer to Section 13.6.2.5, “/etc/sysconfig/network/ifcfg-*” and
man 5 ifcfg.
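Put together, a static configuration that is applied regardless of link state could look like the following sketch. The interface name eth0 and the addresses are examples:

```shell
# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
IPADDR='192.168.100.10/24'
STARTMODE='auto'
# Apply the IP configuration even when no carrier is detected
LINK_REQUIRED='no'
# Wait at most 5 seconds for a link before continuing
LINK_READY_WAIT='5'
```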
If you use the static address, the name servers and default gateway are not configured automatically. To configure name servers, proceed as described in Section 13.4.1.4, “Configuring Host Name and DNS”. To configure a gateway, proceed as described in Section 13.4.1.5, “Configuring Routing”.
One network device can have multiple IP addresses.
These so-called aliases or labels work with IPv4 only; with
IPv6 they are ignored. Using iproute2, network
interfaces can have one or more addresses.
To set additional addresses for your network card with YaST, proceed as follows:
Select a card from the list of detected cards in the tab of the YaST dialog and click .
In the › tab, click .
Enter , , and . Do not include the interface name in the alias name.
To activate the configuration, confirm the settings.
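In the resulting ifcfg file, each additional address is stored with a suffix. The following is a sketch; the suffix, label, and addresses are examples:

```shell
# /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
IPADDR='192.168.100.10/24'
# Additional IPv4 address with an alias label (IPv4 only;
# ignored for IPv6)
IPADDR_web='192.168.100.11/24'
LABEL_web='web'
STARTMODE='auto'
```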
It is possible to change the device name of the network card while it is in use. It is also possible to determine whether the network card should be identified by udev via its hardware (MAC) address or via the bus ID. The latter option is preferable in large servers to simplify hotplugging of cards. To set these options with YaST, proceed as follows:
Select a card from the list of detected cards in the tab of the YaST dialog and click .
Go to the tab. The current device name is shown in . Click .
Select whether udev should identify the card by its or . The current MAC address and bus ID of the card are shown in the dialog.
To change the device name, check the option and edit the name.
To activate the configuration, confirm the settings.
For some network cards, several kernel drivers may be available. If the card is already configured, YaST allows you to select a kernel driver to be used from a list of available suitable drivers. It is also possible to specify options for the kernel driver. To set these options with YaST, proceed as follows:
Select a card from the list of detected cards in the tab of the YaST Network Settings module and click .
Go to the tab.
Select the kernel driver to be used in .
Enter any options for the selected driver in
in the form OPTION=VALUE. If more options
are used, they should be space-separated.
To activate the configuration, confirm the settings.
If you use the method with wicked, you can configure
your device to either start during boot, on cable connection, on card
detection, manually, or never. To change device start-up, proceed as
follows:
In YaST select a card from the list of detected cards in › and click .
In the tab, select the desired entry from .
Choose to start the device during the
system boot. With , the interface
is watched for any existing physical connection. With , the interface is set when available. It is similar to
the option, and only differs in that no
error occurs if the interface is not present at boot time. Choose
to control the interface manually with
ifup. Choose to not start
the device. The is similar to , but the interface does not shut down with the
systemctl stop network command; the
network service also cares about the
wicked service if wicked is active.
Use this if you use an NFS or iSCSI root file system.
To activate the configuration, confirm the settings.
On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems: the root partition cannot be cleanly unmounted, because the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 13.4.1.2.5, “Activating the Network Device” and choose in the pane.
You can set a maximum transmission unit (MTU) for the interface. MTU refers to the largest allowed packet size in bytes. A higher MTU brings higher bandwidth efficiency. However, large packets can block a slow interface for some time, increasing the lag for further packets.
In YaST select a card from the list of detected cards in › and click .
In the tab, select the desired entry from the list.
To activate the configuration, confirm the settings.
Multifunction devices that support LAN, iSCSI, and FCoE are supported.
YaST FCoE client (yast2 fcoe-client) shows the private
flags in additional columns to allow the user to select the device meant
for FCoE. YaST network module (yast2 lan) excludes
“storage only devices” for network configuration.
In YaST select the InfiniBand device in › and click .
In the tab, select one of the (IPoIB) modes: (default) or .
To activate the configuration, confirm the settings.
For more information about InfiniBand, see
/usr/src/linux/Documentation/infiniband/ipoib.txt.
Without having to perform the detailed firewall setup as described in
Section 15.4, “firewalld”, you can determine the
basic firewall configuration for your device as part of the device setup.
Proceed as follows:
Open the YaST › module. In the tab, select a card from the list of detected cards and click .
Enter the tab of the dialog.
Determine the to which your interface should be assigned. The following options are available:
This option is available only if the firewall is disabled and not running. Only use this option if your machine is part of a larger network that is protected by an outer firewall.
This option is available only if the firewall is enabled. The
firewall is running and the interface is automatically assigned to a
firewall zone. The zone which contains the keyword
any or the external zone will be used for such an
interface.
The firewall is running, but does not enforce any rules to protect this interface. Use this option if your machine is part of a larger network that is protected by an outer firewall. It is also useful for interfaces connected to the internal network, when the machine has several network interfaces.
A demilitarized zone is an additional line of defense in front of an internal network and the (hostile) Internet. Hosts assigned to this zone can be reached from the internal network and from the Internet, but cannot access the internal network.
The firewall is running on this interface and fully protects it against other—presumably hostile—network traffic. This is the default option.
To activate the configuration, confirm the settings.
If a network card is not detected correctly, the card is not included in the list of detected cards. If you are sure that your system includes a driver for your card, you can configure it manually. You can also configure special network device types, such as bridge, bond, TUN or TAP. To configure an undetected network card (or a special device) proceed as follows:
In the › › dialog in YaST click .
In the dialog, set the of the interface from the available options and . If the network card is a USB device, activate the respective check box and exit this dialog with . Otherwise, you can define the kernel to be used for the card and its , if necessary.
In , you can set
ethtool options used by ifup for
the interface. For information about available options, see the
ethtool manual page.
If the option string starts with a
- (for example, -K
INTERFACE_NAME rx on), the second
word in the string is replaced with the current interface name. Otherwise
(for example, autoneg off speed 10)
ifup adds -s
INTERFACE_NAME to the beginning.
Click .
Configure any needed options, such as the IP address, device activation or firewall zone for the interface in the , , and tabs. For more information about the configuration options, see Section 13.4.1.2, “Changing the Configuration of a Network Card”.
If you selected as the device type of the interface, configure the wireless connection in the next dialog.
To activate the new network configuration, confirm the settings.
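The ethtool option handling described in the procedure above is stored in the interface's ifcfg file. On SUSE systems the variable is typically named ETHTOOL_OPTIONS; the values below are examples based on the substitution behavior described, so consult the ethtool manual page for valid options:

```shell
# /etc/sysconfig/network/ifcfg-eth0
# Leading "-": the second word is replaced with the interface
# name, so for eth0 this runs: ethtool -K eth0 rx on
ETHTOOL_OPTIONS='-K IFNAME rx on'
# Without a leading "-", "-s <interface name>" is prepended,
# so this would run: ethtool -s eth0 autoneg off speed 10
#ETHTOOL_OPTIONS='autoneg off speed 10'
```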
If you did not change the network configuration during installation and the Ethernet card was already available, a host name was automatically generated for your computer and DHCP was activated. The same applies to the name service information your host needs to integrate into a network environment. If DHCP is used for network address setup, the list of domain name servers is automatically filled with the appropriate data. If a static setup is preferred, set these values manually.
To change the name of your computer and adjust the name server search list, proceed as follows:
Go to the › tab in the module in YaST.
Enter the and, if needed, the . The domain is especially important if the machine is a mail server. Note that the host name is global and applies to all set network interfaces.
If you are using DHCP to get an IP address, the host name of your computer will be set automatically by DHCP. You should disable this behavior if you connect to different networks, because they may assign different host names, and changing the host name at runtime may confuse the graphical desktop. To disable using DHCP to get an IP address, deactivate .
associates your host
name with the 127.0.0.2 (loopback) IP address in
/etc/hosts. This is a useful option if you want the
host name to be resolvable at all times, even without an active network.
In , select the way the DNS
configuration (name servers, search list, the content of the
/etc/resolv.conf file) is modified.
If the option is selected, the
configuration is handled by the netconfig script which
merges the data defined statically (with YaST or in the configuration
files) with data obtained dynamically (from the DHCP client or
NetworkManager). This default policy is usually sufficient.
If the option is selected,
netconfig is not allowed to modify the
/etc/resolv.conf file. However, this file can be
edited manually.
If the option is selected, a
string defining the merge policy
should be specified. The string consists of a comma-separated list of
interface names to be considered a valid source of settings. Except for
complete interface names, basic wild cards to match multiple interfaces
are allowed, as well. For example, eth* ppp? will
first target all eth and then all ppp0-ppp9 interfaces. There are two
special policy values that indicate how to apply the static settings
defined in the /etc/sysconfig/network/config file:
STATIC
The static settings need to be merged together with the dynamic settings.
STATIC_FALLBACK
The static settings are used only when no dynamic configuration is available.
For more information, see the man page of netconfig(8)
(man 8 netconfig).
Enter the and fill in the list. Name servers must be specified by IP addresses, such as 192.168.1.116, not by host names. Names specified in the tab are domain names used for resolving host names without a specified domain. If more than one is used, separate domains with commas or white space.
To activate the configuration, confirm the settings.
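The custom merge policy described above ends up as netconfig variables in /etc/sysconfig/network/config. The following is a hypothetical sketch; the variable name NETCONFIG_DNS_POLICY and its values are based on the policy strings described above, so verify against man 8 netconfig:

```shell
# /etc/sysconfig/network/config
# Consider eth* interfaces first, then ppp0-ppp9,
# as valid sources of DNS settings
NETCONFIG_DNS_POLICY="eth* ppp?"
# Alternatively, merge the static settings with dynamic ones:
#NETCONFIG_DNS_POLICY="STATIC eth*"
```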
It is also possible to edit the host name using YaST from the command
line. The changes made by YaST take effect immediately (which is not the
case when editing the /etc/HOSTNAME file manually). To
change the host name, use the following command:
root # yast dns edit hostname=HOSTNAME
To change the name servers, use the following commands:
root # yast dns edit nameserver1=192.168.1.116
root # yast dns edit nameserver2=192.168.1.117
root # yast dns edit nameserver3=192.168.1.118
To make your machine communicate with other machines and other networks, routing information must be given to make network traffic take the correct path. If DHCP is used, this information is automatically provided. If a static setup is used, this data must be added manually.
In YaST go to › .
Enter the IP address of the (IPv4 and IPv6 if necessary). The default gateway matches every possible destination, but if a routing table entry exists that matches the required address, this will be used instead of the default route via the Default Gateway.
More entries can be entered in the .
Enter the network IP address,
IP address and the .
Select the through which the traffic to the
defined network will be routed (the minus sign stands for any device).
To omit any of these values, use the minus sign -. To
enter a default gateway into the table, use default in
the field.
If more default routes are used, it is possible to specify the metric
option to determine which route has a higher priority. To specify the
metric option, enter - metric
NUMBER in
. The route with the lowest metric has the highest
priority and is used as the default. If the network device is disconnected,
its route will be removed and the next one will be used. However, the
current kernel does not use metric in static routing, only routing daemons
like multipathd do.
If the system is a router, enable and in the as needed.
To activate the configuration, confirm the settings.
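On disk, the same routing information is kept in /etc/sysconfig/network/routes, one route per line with the columns destination, gateway, netmask, and device, where the minus sign serves as a wild card. The addresses below are examples:

```
# /etc/sysconfig/network/routes
# destination   gateway          netmask       device
default         192.168.100.1    -             -
10.20.0.0       192.168.100.254  255.255.0.0   eth0
```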
NetworkManager is the ideal solution for laptops and other portable computers. With NetworkManager, you do not need to worry about configuring network interfaces and switching between networks when you are moving.
wicked #
However, NetworkManager is not a suitable solution for all cases, so you can
still choose between the wicked controlled method for
managing network connections and NetworkManager. If you want to manage your
network connection with NetworkManager, enable NetworkManager in the YaST Network
Settings module as described in Section 28.2, “Enabling or Disabling NetworkManager” and
configure your network connections with NetworkManager. For a list of use cases
and a detailed description of how to configure and use NetworkManager, refer to
Chapter 28, Using NetworkManager.
Some differences between wicked and NetworkManager:
root Privileges
If you use NetworkManager for network setup, you can easily switch, stop or
start your network connection at any time from within your desktop
environment using an applet. NetworkManager also makes it possible to change
and configure wireless card connections without requiring
root privileges. For this reason, NetworkManager is the ideal
solution for a mobile workstation.
wicked also provides some ways to switch, stop or
start the connection with or without user intervention, like
user-managed devices. However, this always requires root
privileges to change or configure a network device. This is often a
problem for mobile computing, where it is not possible to preconfigure
all the connection possibilities.
Both wicked and NetworkManager can handle network
connections with a wireless network (with WEP, WPA-PSK, and
WPA-Enterprise access) and wired networks using DHCP and static
configuration. They also support connection through dial-up and VPN.
With NetworkManager you can also connect a mobile broadband (3G) modem
or set up a DSL connection, which is not possible with the traditional
configuration.
NetworkManager tries to keep your computer connected at all times using the
best connection available. If the network cable is accidentally
disconnected, it tries to reconnect. It can find the network with the
best signal strength from the list of your wireless connections and
automatically use it to connect. To get the same functionality with
wicked, more configuration effort is required.
The individual network connection settings created with NetworkManager are
stored in configuration profiles. The system
connections configured with either NetworkManager or YaST are saved in
/etc/NetworkManager/system-connections/* or in
/etc/sysconfig/network/ifcfg-*. For GNOME, all
user-defined connections are stored in GConf.
In case no profile is configured, NetworkManager automatically creates one and
names it Auto $INTERFACE-NAME. This is done in an
attempt to work without any configuration for as many cases as (securely)
possible. If the automatically created profiles do not suit your needs,
use the network connection configuration dialogs provided by GNOME to
modify them as desired. For more information, see
Section 28.3, “Configuring Network Connections”.
On centrally administered machines, certain NetworkManager features can be controlled or disabled with PolKit, for example, whether a user is allowed to modify administrator-defined connections or to define their own network configurations. To view or change the respective NetworkManager policies, start the graphical tool for PolKit. In the tree on the left side, find them below the entry. For an introduction to PolKit and details on how to use it, refer to Chapter 9, Authorization with PolKit.
Manual configuration of the network software should be the last alternative. Using YaST is recommended. However, this background information about the network configuration can also assist your work with YaST.
wicked Network Configuration #
The tool and library called wicked provides a new
framework for network configuration.
One of the challenges with traditional network interface management is that different layers of network management get jumbled together into one single script, or at most two different scripts. These scripts interact with each other in a way that is not well-defined. This leads to unpredictable issues, obscure constraints and conventions, etc. Several layers of special hacks for a variety of different scenarios increase the maintenance burden. Address configuration protocols are being used that are implemented via daemons like dhcpcd, which interact rather poorly with the rest of the infrastructure. Funky interface naming schemes that require heavy udev support are introduced to achieve persistent identification of interfaces.
The idea of wicked is to decompose the problem in several ways. None of them is entirely novel, but trying to put ideas from different projects together is hopefully going to create a better solution overall.
One approach is to use a client/server model. This allows wicked to define standardized facilities for things like address configuration that are well integrated with the overall framework. For example, using a specific address configuration, the administrator may request that an interface should be configured via DHCP or IPv4 zeroconf. In this case, the address configuration service simply obtains the lease from its server and passes it on to the wicked server process that installs the requested addresses and routes.
The other approach to decomposing the problem is to enforce the layering aspect. For any type of network interface, it is possible to define a dbus service that configures the network interface's device layer—a VLAN, a bridge, a bonding, or a paravirtualized device. Common functionality, such as address configuration, is implemented by joint services that are layered on top of these device specific services without having to implement them specifically.
The wicked framework implements these two aspects by using a variety of dbus services, which get attached to a network interface depending on its type. Here is a rough overview of the current object hierarchy in wicked.
Each network interface is represented via a child object of
/org/opensuse/Network/Interfaces. The name of the
child object is given by its ifindex. For example, the loopback interface,
which usually gets ifindex 1, is
/org/opensuse/Network/Interfaces/1, the first
Ethernet interface registered is
/org/opensuse/Network/Interfaces/2.
Each network interface has a “class” associated with it, which
is used to select the dbus interfaces it supports. By default, each network
interface is of class netif, and
wickedd will automatically
attach all interfaces compatible with this class. In the current
implementation, this includes the following interfaces:
Generic network interface functions, such as taking the link up or down, assigning an MTU, etc.
Address configuration services for DHCP, IPv4 zeroconf, etc.
Beyond this, network interfaces may require or offer special configuration
mechanisms. For an Ethernet device, for example, you should be able to
control the link speed, offloading of checksumming, etc. To achieve this,
Ethernet devices have a class of their own, called
netif-ethernet, which is a subclass of
netif. As a consequence, the dbus interfaces assigned to
an Ethernet interface include all the services listed above, plus the
org.opensuse.Network.Ethernet service available only to objects belonging to the netif-ethernet
class.
Similarly, there exist classes for interface types like bridges, VLANs, bonds, or infinibands.
How do you interact with an interface like VLAN (which is really a virtual network interface that
sits on top of an Ethernet device) that needs to be created
first? For this, wicked defines factory
interfaces, such as
org.opensuse.Network.VLAN.Factory. Such a factory
interface offers a single function that lets you create an interface of the
requested type. These factory interfaces are attached to the
/org/opensuse/Network/Interfaces list node.
wicked Architecture and Features #
The wicked service comprises several parts as depicted
in Figure 13.4, “wicked architecture”.
wicked architecture #
wicked currently supports the following:
Configuration file back-ends to parse SUSE style
/etc/sysconfig/network files.
An internal configuration back-end to represent network interface configuration in XML.
Bring up and shutdown of “normal” network interfaces such as Ethernet or InfiniBand, VLAN, bridge, bonds, tun, tap, dummy, macvlan, macvtap, hsi, qeth, iucv, and wireless (currently limited to one wpa-psk/eap network) devices.
A built-in DHCPv4 client and a built-in DHCPv6 client.
The nanny daemon (enabled by default) helps to automatically bring up configured interfaces when the device is available (interface hotplugging) and set up the IP configuration when a link (carrier) is detected. See Section 13.6.1.3, “Nanny” for more information.
wicked was implemented as a group of DBus services
that are integrated with systemd. So the usual
systemctl commands will apply to
wicked.
wicked #
On openSUSE Leap wicked is running by default on
desktop or server hardware. On mobile hardware NetworkManager is running by
default. If you want to check what is currently enabled and whether it is
running, call:
systemctl status network
If wicked is enabled, you will see something along these
lines:
wicked.service - wicked managed network interfaces
Loaded: loaded (/usr/lib/systemd/system/wicked.service; enabled)
...
In case something different is running (for example, NetworkManager) and you want to
switch to wicked, first stop what is running and then
enable wicked:
systemctl is-active network && \
    systemctl stop network
systemctl enable --force wicked
This enables the wicked services, creates the
network.service to wicked.service
alias link, and starts the network at the next boot.
Starting the server process:
systemctl start wickedd
This starts wickedd (the main server) and associated
supplicants:
/usr/lib/wicked/bin/wickedd-auto4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp6 --systemd --foreground
/usr/sbin/wickedd --systemd --foreground
/usr/sbin/wickedd-nanny --systemd --foreground
Then bringing up the network:
systemctl start wicked
Alternatively use the network.service alias:
systemctl start network
These commands are using the default or system configuration sources as
defined in /etc/wicked/client.xml.
To enable debugging, set WICKED_DEBUG in
/etc/sysconfig/network/config, for example:
WICKED_DEBUG="all"
Or, to omit some:
WICKED_DEBUG="all,-dbus,-objectmodel,-xpath,-xml"
Use the client utility to display interface information for all interfaces or the interface specified with IFNAME:
wicked show all
wicked show IFNAME
In XML output:
wicked show-xml all
wicked show-xml IFNAME
Bringing up one interface:
wicked ifup eth0
wicked ifup wlan0
...
Because there is no configuration source specified, the wicked client
checks its default sources of configuration defined in
/etc/wicked/client.xml:
firmware: iSCSI Boot Firmware Table (iBFT)
compat: ifcfg
files—implemented for compatibility
Whatever wicked gets from those sources for a given
interface is applied. The intended order of importance is
firmware, then compat—this may
be changed in the future.
For more information, see the wicked man page.
Nanny is an event and policy driven daemon that is responsible for
asynchronous or unsolicited scenarios such as hotplugging devices. Thus the
nanny daemon helps with starting or restarting delayed or temporarily gone
devices. Nanny monitors device and link changes, and integrates new devices
defined by the current policy set. Nanny continues the setup even after
ifup has already exited because of specified timeout
constraints.
By default, the nanny daemon is active on the system. It is enabled in the
/etc/wicked/common.xml configuration file:
<config> ... <use-nanny>true</use-nanny> </config>
This setting causes ifup and ifreload to apply a policy with the effective
configuration to the nanny daemon; then, nanny configures
wickedd and thus ensures
hotplug support. It waits in the background for events or changes (such as
new devices or carrier on).
For bonds and bridges, it may make sense to define the entire device topology in one file (ifcfg-bondX), and bring it up in one go. wicked then can bring up the whole configuration if you specify the top level interface names (of the bridge or bond):
wicked ifup br0
This command automatically sets up the bridge and its dependencies in the appropriate order without the need to list the dependencies (ports, etc.) separately.
To bring up multiple interfaces in one command:
wicked ifup bond0 br0 br1 br2
Or also all interfaces:
wicked ifup all
When you need to use tunnels with wicked, use the
TUNNEL_DEVICE variable. It specifies an optional device
name to bind the tunnel to. The tunneled packets are only routed via this
device.
For more information, refer to man 5 ifcfg-tunnel.
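As a sketch, a tunnel bound to a device might look like the following (variable names are taken from man 5 ifcfg-tunnel; the interface name and all addresses are examples, not values from this manual):

```shell
# Hypothetical /etc/sysconfig/network/ifcfg-tun0:
STARTMODE='auto'
BOOTPROTO='static'
TUNNEL='gre'                        # tunnel type
TUNNEL_DEVICE='eth0'                # bind the tunnel to this device
TUNNEL_LOCAL_IPADDR='192.168.1.2'   # local tunnel endpoint
TUNNEL_REMOTE_IPADDR='203.0.113.7'  # remote tunnel endpoint
IPADDR='10.0.0.1/30'                # address inside the tunnel
```

Because TUNNEL_DEVICE is set, the tunneled packets are only routed via eth0.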
With wicked, there is no need to actually take down an
interface to reconfigure it (unless it is required by the kernel). For
example, to add another IP address or route to a statically configured
network interface, add the IP address to the interface definition, and do
another “ifup” operation. The server will try hard to update
only those settings that have changed. This applies to link-level options
such as the device MTU or the MAC address, and network-level settings, such
as addresses, routes, or even the address configuration mode (for example,
when moving from a static configuration to DHCP).
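For example, adding a second static address might look like this (a sketch; the IPADDR_<suffix> syntax is described in man 5 ifcfg, and eth0 with these addresses is only an example):

```shell
# Hypothetical /etc/sysconfig/network/ifcfg-eth0 after adding a
# second address (the IPADDR_1 line is the only change):
#   STARTMODE='auto'
#   BOOTPROTO='static'
#   IPADDR='192.168.1.10/24'
#   IPADDR_1='192.168.1.11/24'

# Re-apply the configuration; the server updates only the changed
# settings and does not take the interface down:
sudo wicked ifup eth0
```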
Things get tricky, of course, with virtual interfaces that combine several real devices, such as bridges or bonds. For bonded devices, certain parameters cannot be changed while the device is up; attempting to do so results in an error.
However, adding or removing the child devices of a bond or bridge, or choosing a bond's primary interface, should still work.
wicked is designed to be extensible with shell scripts.
These extensions can be defined in the config.xml
file.
Currently, several classes of extensions are supported:
link configuration: these are scripts responsible for setting up a device's link layer according to the configuration provided by the client, and for tearing it down again.
address configuration: these are scripts responsible for managing a
device's address configuration. Usually address configuration and DHCP
are managed by wicked itself, but can be implemented
by means of extensions.
firewall extension: these scripts can apply firewall rules.
Typically, extensions have a start and a stop command, an optional “pid file”, and a set of environment variables that get passed to the script.
To illustrate how this is supposed to work, look at a firewall extension
defined in /etc/wicked/server.xml:
<dbus-service interface="org.opensuse.Network.Firewall">
  <action name="firewallUp" command="/etc/wicked/extensions/firewall up"/>
  <action name="firewallDown" command="/etc/wicked/extensions/firewall down"/>
  <!-- default environment for all calls to this extension script -->
  <putenv name="WICKED_OBJECT_PATH" value="$object-path"/>
  <putenv name="WICKED_INTERFACE_NAME" value="$property:name"/>
  <putenv name="WICKED_INTERFACE_INDEX" value="$property:index"/>
</dbus-service>
The extension is attached to the
<dbus-service>
tag and defines commands to execute for the actions of this interface.
Further, the declaration can define and initialize environment variables
passed to the actions.
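The dispatch inside such an extension script can be sketched as follows. Here, firewall_action is a hypothetical stand-in for the body of /etc/wicked/extensions/firewall: the action name arrives as the first argument (cf. the firewallUp/firewallDown declarations), and WICKED_INTERFACE_NAME is provided via the <putenv> declarations:

```shell
#!/bin/sh
# Sketch only: the real script applies actual firewall rules
# instead of printing messages.
firewall_action() {
  case "$1" in
    up)   echo "applying rules for ${WICKED_INTERFACE_NAME:-unknown}" ;;
    down) echo "removing rules for ${WICKED_INTERFACE_NAME:-unknown}" ;;
    *)    return 1 ;;  # unknown action
  esac
}

WICKED_INTERFACE_NAME=eth0 firewall_action up   # prints: applying rules for eth0
```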
You can extend the handling of configuration files with scripts as well.
For example, DNS updates from leases are ultimately handled by the
extensions/resolver script, with behavior configured
in server.xml:
<system-updater name="resolver">
  <action name="backup" command="/etc/wicked/extensions/resolver backup"/>
  <action name="restore" command="/etc/wicked/extensions/resolver restore"/>
  <action name="install" command="/etc/wicked/extensions/resolver install"/>
  <action name="remove" command="/etc/wicked/extensions/resolver remove"/>
</system-updater>
When an update arrives in wickedd, the system
updater routines parse the lease and call the appropriate commands
(backup, install, etc.) in the
resolver script. This in turn configures the DNS settings using
/sbin/netconfig, or by manually writing
/etc/resolv.conf as a fallback.
This section provides an overview of the network configuration files and explains their purpose and the format used.
/etc/wicked/common.xml #
The /etc/wicked/common.xml file contains common
definitions that should be used by all applications. It is sourced/included
by the other configuration files in this directory. Although you can use
this file to enable debugging across all
wicked components, we recommend using the file
/etc/wicked/local.xml for this purpose, because
/etc/wicked/common.xml may be overwritten by
maintenance updates and your changes would be lost. In the default
installation, /etc/wicked/common.xml includes
/etc/wicked/local.xml, so you typically do not need to
modify /etc/wicked/common.xml.
In case you want to disable nanny by setting the
<use-nanny> to false, restart
the wickedd.service and then run the following command to
apply all configurations and policies:
tux > sudo wicked ifup all
The wickedd, wicked, or
nanny programs try to read
/etc/wicked/common.xml if their own configuration
files do not exist.
/etc/wicked/server.xml #
The file /etc/wicked/server.xml is read by the
wickedd server process at start-up. The file stores
extensions to the /etc/wicked/common.xml. On top of
that this file configures handling of a resolver and receiving information
from addrconf supplicants, for example DHCP.
We recommend adding any required changes to a separate file,
/etc/wicked/server-local.xml, which is included by
/etc/wicked/server.xml. Using a separate file
prevents your changes from being overwritten during maintenance updates.
/etc/wicked/client.xml #
The /etc/wicked/client.xml is used by the
wicked command. The file specifies the location of a
script used when discovering devices managed by ibft and configures
locations of network interface configurations.
We recommend adding any required changes to a separate file,
/etc/wicked/client-local.xml, which is included by
/etc/wicked/client.xml. Using a separate file
prevents your changes from being overwritten during maintenance updates.
/etc/wicked/nanny.xml #
The /etc/wicked/nanny.xml file configures the types of
link layers. We recommend adding specific configuration to a separate file,
/etc/wicked/nanny-local.xml, to avoid losing your
changes during maintenance updates.
/etc/sysconfig/network/ifcfg-* #
These files contain the traditional configurations for network interfaces. In openSUSE versions prior to Leap, this was the only supported format besides iBFT firmware.
wicked and the ifcfg-* Files
wicked reads these files if you specify the
compat: prefix. According to the openSUSE Leap default
configuration in /etc/wicked/client.xml,
wicked tries these files before the XML configuration
files in /etc/wicked/ifconfig.
The --ifconfig switch is provided mostly for testing only.
If specified, the default configuration sources defined in
/etc/wicked/client.xml are not applied.
The ifcfg-* files include information such as the start
mode and the IP address. Possible parameters are described in the manual
page of ifup. Additionally, most variables from the
dhcp and wireless files can be
used in the ifcfg-* files if a general setting should
be used for only one interface. However, most of the
/etc/sysconfig/network/config variables are global and
cannot be overridden in ifcfg-files. For example,
NETCONFIG_* variables are global.
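As a minimal sketch of such a file (the interface name and values are examples; see man 5 ifcfg for all available parameters):

```shell
# Hypothetical /etc/sysconfig/network/ifcfg-eth0 for a static setup:
STARTMODE='auto'          # bring the interface up at boot
BOOTPROTO='static'        # no DHCP
IPADDR='192.168.1.10/24'  # address in CIDR notation
```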
For configuring macvlan and
macvtap interfaces, see the
ifcfg-macvlan and
ifcfg-macvtap man pages. For example, for a macvlan
interface, provide an ifcfg-macvlan0 file with settings
as follows:
STARTMODE='auto'
MACVLAN_DEVICE='eth0'
#MACVLAN_MODE='vepa'
#LLADDR=02:03:04:05:06:aa
For ifcfg.template, see
Section 13.6.2.6, “/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless”.
/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless #
The file config contains general settings for the
behavior of ifup, ifdown and
ifstatus. dhcp contains settings for
DHCP and wireless for wireless LAN cards. The variables
in all three configuration files are commented. Some variables from
/etc/sysconfig/network/config can also be used in
ifcfg-* files, where they are given a higher priority.
The /etc/sysconfig/network/ifcfg.template file lists
variables that can be specified in a per interface scope. However, most of
the /etc/sysconfig/network/config variables are global
and cannot be overridden in ifcfg-files. For example,
NETWORKMANAGER or
NETCONFIG_* variables are global.
In openSUSE prior to Leap, DHCPv6 used to work even on networks where IPv6 Router Advertisements (RAs) were not configured properly. Starting with openSUSE Leap, DHCPv6 will correctly require that at least one of the routers on the network sends out RAs that indicate that this network is managed by DHCPv6.
For networks where the router cannot be configured correctly, the ifcfg option allows the user to override this
behavior by specifying DHCLIENT6_MODE='managed' in the
ifcfg file.
You can also activate this workaround with a boot parameter in the
installation system:
ifcfg=eth0=dhcp6,DHCLIENT6_MODE=managed
/etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-* #
The static routing of TCP/IP packets is determined by the
/etc/sysconfig/network/routes and
/etc/sysconfig/network/ifroute-* files. All the static
routes required by the various system tasks can be specified in
/etc/sysconfig/network/routes: routes to a host, routes
to a host via a gateway and routes to a network. For each interface that
needs individual routing, define an additional configuration file:
/etc/sysconfig/network/ifroute-*. Replace the wild card
(*) with the name of the interface. The entries in the
routing configuration files look like this:
# Destination Gateway Netmask Interface Options
The route's destination is in the first column. This column may contain the
IP address of a network or host or, in the case of
reachable name servers, the fully qualified network or
host name. The network should be written in CIDR notation (address with the
associated routing prefix-length) such as 10.10.0.0/16 for IPv4 or fc00::/7
for IPv6 routes. The keyword default indicates that the
route is the default gateway in the same address family as the gateway. For
devices without a gateway use explicit 0.0.0.0/0 or ::/0 destinations.
The second column contains the default gateway or a gateway through which a host or network can be accessed.
The third column is deprecated; it used to contain the IPv4 netmask of the
destination. For IPv6 routes, the default route, or when using a
prefix-length (CIDR notation) in the first column, enter a dash
(-) here.
The fourth column contains the name of the interface. If you leave it empty
using a dash (-), it can cause unintended behavior in
/etc/sysconfig/network/routes. For more information,
see the routes man page.
An (optional) fifth column can be used to specify special options. For
details, see the routes man page.
# --- IPv4 routes in CIDR prefix notation:
# Destination     [Gateway]        -                Interface
127.0.0.0/8       -                -                lo
204.127.235.0/24  -                -                eth0
default           204.127.235.41   -                eth0
207.68.156.51/32  207.68.145.45    -                eth1
192.168.0.0/16    207.68.156.51    -                eth1

# --- IPv4 routes in deprecated netmask notation:
# Destination     [Dummy/Gateway]  Netmask          Interface
#
127.0.0.0         0.0.0.0          255.255.255.0    lo
204.127.235.0     0.0.0.0          255.255.255.0    eth0
default           204.127.235.41   0.0.0.0          eth0
207.68.156.51     207.68.145.45    255.255.255.255  eth1
192.168.0.0       207.68.156.51    255.255.0.0      eth1

# --- IPv6 routes are always using CIDR notation:
# Destination       [Gateway]                 -      Interface
2001:DB8:100::/64   -                         -      eth0
2001:DB8:100::/32   fe80::216:3eff:fe6d:c042  -      eth0
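The relationship between the deprecated netmask column and the CIDR prefix length can be sketched with a small helper (mask2prefix is a hypothetical name, not part of the routes tooling; it simply counts the set bits of a dotted netmask):

```shell
# Convert a dotted IPv4 netmask (third column, deprecated) to the
# CIDR prefix length used in the first column.
mask2prefix() {
  local IFS=. n=0 octet
  for octet in $1; do            # split the mask on dots
    while [ "$octet" -gt 0 ]; do # count set bits in each octet
      n=$((n + (octet & 1)))
      octet=$((octet >> 1))
    done
  done
  echo "$n"
}

mask2prefix 255.255.240.0   # prints 20
```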
/etc/resolv.conf #
The domain to which the host belongs is specified in
/etc/resolv.conf (keyword
search). Up to six domains with a total of 256
characters can be specified with the search option.
When resolving a name that is not fully qualified, an attempt is made to
generate one by attaching the individual search
entries. Up to 3 name servers can be specified with the
nameserver option, each on a line of its own.
Comments are preceded by a hash mark (#)
or a semicolon (;). As an example, see
Example 13.6, “/etc/resolv.conf”.
However, the /etc/resolv.conf should not be edited by
hand. Instead, it is generated by the netconfig script.
To define static DNS configuration without using YaST, edit the
appropriate variables manually in the
/etc/sysconfig/network/config file:
NETCONFIG_DNS_STATIC_SEARCHLIST
list of DNS domain names used for host name lookup
NETCONFIG_DNS_STATIC_SERVERS
list of name server IP addresses to use for host name lookup
NETCONFIG_DNS_FORWARDER
the name of the DNS forwarder that needs to be configured, for example
bind or resolver
NETCONFIG_DNS_RESOLVER_OPTIONS
arbitrary options that will be written to
/etc/resolv.conf, for example:
debug attempts:1 timeout:10
For more information, see the resolv.conf man page.
NETCONFIG_DNS_RESOLVER_SORTLIST
list of up to 10 items, for example:
130.155.160.0/255.255.240.0 130.155.0.0
For more information, see the resolv.conf man
page.
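Putting the variables above together, a static DNS setup might be sketched as follows (the domain and server addresses are examples; the netconfig update -m dns command is described in the /sbin/netconfig section):

```shell
# Hypothetical static DNS settings in /etc/sysconfig/network/config:
#   NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
#   NETCONFIG_DNS_STATIC_SERVERS="192.168.1.116 192.168.1.117"

# Regenerate /etc/resolv.conf for the DNS module only:
sudo netconfig update -m dns
```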
To disable DNS configuration using netconfig, set
NETCONFIG_DNS_POLICY=''. For more information about
netconfig, see the netconfig(8)
man page (man 8 netconfig).
/etc/resolv.conf #
# Our domain
search example.com
#
# We use dns.example.com (192.168.1.116) as nameserver
nameserver 192.168.1.116
/sbin/netconfig #
netconfig is a modular tool to manage additional network
configuration settings. It merges statically defined settings with settings
provided by autoconfiguration mechanisms such as DHCP or PPP according to a
predefined policy. The required changes are applied to the system by calling
the netconfig modules that are responsible for modifying a configuration
file and restarting a service or performing a similar action.
netconfig recognizes three main actions. The
netconfig modify and netconfig remove
commands are used by daemons such as DHCP or PPP to provide or remove
settings to netconfig. Only the netconfig update command
is available for the user:
modify
The netconfig modify command modifies the current
interface and service specific dynamic settings and updates the network
configuration. Netconfig reads settings from standard input or from a
file specified with the --lease-file
FILENAME option and internally stores
them until a system reboot (or the next modify or remove action). Already
existing settings for the same interface and service combination are
overwritten. The interface is specified by the -i
INTERFACE_NAME parameter. The service
is specified by the -s
SERVICE_NAME parameter.
remove
The netconfig remove command removes the dynamic
settings provided by a modify action for the specified interface
and service combination and updates the network configuration. The
interface is specified by the -i
INTERFACE_NAME parameter. The service
is specified by the -s
SERVICE_NAME parameter.
update
The netconfig update command updates the network
configuration using current settings. This is useful when the policy or
the static configuration has changed. Use the -m
MODULE_TYPE parameter, if you want to
update a specified service only (dns,
nis, or ntp).
The netconfig policy and the static configuration settings are defined
either manually or using YaST in the
/etc/sysconfig/network/config file. The dynamic
configuration settings provided by autoconfiguration tools such as DHCP or
PPP are delivered directly by these tools with the netconfig
modify and netconfig remove actions.
When NetworkManager is enabled, netconfig (in policy mode auto)
uses only NetworkManager settings, ignoring settings from any other interfaces
configured using the traditional ifup method. If NetworkManager does not provide any
setting, static settings are used as a fallback. A mixed usage of NetworkManager and
the wicked method is not supported.
For more information about netconfig, see man 8
netconfig.
/etc/hosts #
In this file, shown in Example 13.7, “/etc/hosts”, IP addresses
are assigned to host names. If no name server is implemented, all hosts to
which an IP connection will be set up must be listed here. For each host,
enter a line consisting of the IP address, the fully qualified host name,
and the host name into the file. The IP address must be at the beginning of
the line and the entries separated by blanks and tabs. Comments are always
preceded by the # sign.
/etc/hosts #
127.0.0.1 localhost
192.168.2.100 jupiter.example.com jupiter
192.168.2.101 venus.example.com venus
/etc/networks #
Here, network names are converted to network addresses. The format is
similar to that of the hosts file, except the network
names precede the addresses. See Example 13.8, “/etc/networks”.
/etc/networks #
loopback 127.0.0.0
localnet 192.168.0.0
/etc/host.conf #
Name resolution—the translation of host and network names via the
resolver library—is controlled by this file. This
file is only used for programs linked to libc4 or libc5. For current glibc
programs, refer to the settings in /etc/nsswitch.conf.
Each parameter must always be entered on a separate line. Comments are
preceded by a # sign.
Table 13.2, “Parameters for /etc/host.conf” shows the parameters available. A
sample /etc/host.conf is shown in
Example 13.9, “/etc/host.conf”.
order hosts, bind
Specifies in which order the services are accessed for the name resolution. Available arguments are (separated by blank spaces or commas):
hosts: searches the /etc/hosts file
bind: accesses a name server
nis: uses NIS
multi on/off
Defines if a host entered in /etc/hosts can have multiple IP addresses.
nospoof on spoofalert on/off
These parameters influence the name server spoofing but do not exert any influence on the network configuration.
trim domainname
The specified domain name is separated from the host name after host
name resolution (as long as the host name includes the domain name).
This option is useful only if names from the local domain are in the
/etc/hosts file, but should still be recognized with the attached domain names.
/etc/host.conf #
# We have named running
order hosts bind
# Allow multiple address
multi on
/etc/nsswitch.conf #
The introduction of the GNU C Library 2.0 was accompanied by the
introduction of the Name Service Switch (NSS). Refer to
the nsswitch.conf(5) man page and The GNU
C Library Reference Manual for details.
The order for queries is defined in the file
/etc/nsswitch.conf. A sample
nsswitch.conf is shown in
Example 13.10, “/etc/nsswitch.conf”. Comments are preceded by
# signs. In this example, the entry under the
hosts database means that a request is sent to
/etc/hosts (files), then via
DNS (see Chapter 19, The Domain Name System).
/etc/nsswitch.conf #
passwd:     compat
group:      compat
hosts:      files dns
networks:   files dns
services:   db files
protocols:  db files
rpc:        files
ethers:     files
netmasks:   files
netgroup:   files nis
publickey:  files
bootparams: files
automount:  files nis
aliases:    files nis
shadow:     compat
The “databases” available over NSS are listed in Table 13.3, “Databases Available via /etc/nsswitch.conf”. The configuration options for NSS databases are listed in Table 13.4, “Configuration Options for NSS “Databases””.
Table 13.3: Databases Available via /etc/nsswitch.conf
aliases
Mail aliases implemented by sendmail; see man 5 aliases.
ethers
Ethernet addresses.
netmasks
List of networks and their subnet masks. Only needed, if you use subnetting.
group
User groups used by getgrent; see also the man page for group.
hosts
Host names and IP addresses, used by gethostbyname and similar functions.
netgroup
Valid host and user lists in the network for controlling access
permissions; see the netgroup(5) man page.
networks
Network names and addresses, used by getnetent.
publickey
Public and secret keys for Secure_RPC used by NFS and NIS+.
passwd
User passwords, used by getpwent; see the passwd(5) man page.
protocols
Network protocols, used by getprotoent; see the protocols(5) man page.
rpc
Remote procedure call names and addresses, used by
getrpcbyname and similar functions.
services
Network services, used by getservent.
shadow
Shadow passwords of users, used by getspnam; see the shadow(5) man page.

Table 13.4: Configuration Options for NSS "Databases"
files
directly access files, for example, /etc/aliases
db
access via a database
nis, nisplus
NIS, see also Chapter 3, Using NIS
dns
can only be used as an extension for hosts and networks
compat
can only be used as an extension for passwd, shadow and group
/etc/nscd.conf #
This file is used to configure nscd (name service cache daemon). See the
nscd(8) and nscd.conf(5)
man pages. By default, the system entries of passwd,
groups, and hosts are cached by nscd. This is important for the
performance of directory services, like NIS and LDAP, because otherwise the
network connection needs to be used for every access to names, groups or
hosts.
If the caching for passwd is activated, it usually takes
about fifteen seconds until a newly added local user is recognized. Reduce
this waiting time by restarting nscd with:
tux > sudo systemctl restart nscd
/etc/HOSTNAME #
/etc/HOSTNAME contains the fully qualified host name
(FQHN). The fully qualified host name is the host name with the domain name
attached. This file must contain only one line (in which the host name is
set). It is read while the machine is booting.
Before you write your configuration to the configuration files, you can test
it. To set up a test configuration, use the ip command.
To test the connection, use the ping command.
The command ip changes the network configuration directly
without saving it in the configuration file. Unless you enter your
configuration in the correct configuration files, the changed network
configuration is lost on reboot.
ifconfig and route Are Obsolete
The ifconfig and route tools are
obsolete. Use ip instead. ifconfig,
for example, limits interface names to 9 characters.
ip #
ip is a tool to show and configure network devices,
routing, policy routing, and tunnels.
ip is a very complex tool. Its common syntax is
ip OPTIONS OBJECT COMMAND.
You can work with the following objects:
link: this object represents a network device.
address: this object represents the IP address of a device.
neighbor: this object represents an ARP or NDISC cache entry.
route: this object represents the routing table entry.
rule: this object represents a rule in the routing policy database.
maddress: this object represents a multicast address.
mroute: this object represents a multicast routing cache entry.
tunnel: this object represents a tunnel over IP.
If no command is given, the default command is used (usually
list).
Change the state of a device with the command ip link
set DEVICE_NAME
. For example, to deactivate device eth0, enter ip link
set eth0 down. To activate it again, use
ip link set eth0 up.
After activating a device, you can configure it. To set the IP address, use
ip addr
add IP_ADDRESS + dev
DEVICE_NAME. For example, to set the
address of the interface eth0 to 192.168.12.154/30 with standard broadcast
(option brd), enter ip
addr add 192.168.12.154/30 brd + dev eth0.
To have a working connection, you must also configure the default gateway.
To set a gateway for your system, enter ip route
add default via gateway_ip_address. To translate one IP
address to another, use nat: ip route add
nat ip_address via other_ip_address.
To display all devices, use ip link ls. To display the
running interfaces only, use ip link ls up. To print
interface statistics for a device, enter ip -s link
ls device_name. To view addresses of your
devices, enter ip addr. In the output of the ip
addr, also find information about MAC addresses of your devices.
To show all routes, use ip route show.
For more information about using ip, enter
ip help or see the
ip(8) man page. The help option
is also available for all ip subcommands. If, for
example, you need help for
ip addr, enter
ip addr help. Find the
ip manual in
/usr/share/doc/packages/iproute2/ip-cref.pdf.
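The commands above can be combined into a short session sketch (requires root; the device name, addresses, and gateway are examples):

```shell
# Bring the device up, assign an address, set the default gateway,
# then inspect statistics and routes:
ip link set eth0 up
ip addr add 192.168.12.154/30 brd + dev eth0
ip route add default via 192.168.12.153
ip -s link ls eth0
ip route show
```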
The ping command is the standard tool for testing
whether a TCP/IP connection works. It uses the ICMP protocol to send a
small data packet, ECHO_REQUEST datagram, to the destination host,
requesting an immediate reply. If this works, ping
displays a message to that effect. This indicates that the network link is
functioning.
ping does more than only test the function of the
connection between two computers: it also provides some basic information
about the quality of the connection. In
Example 13.11, “Output of the Command ping”, you can see an example of the
ping output. The second-to-last line contains
information about the number of transmitted packets, packet loss, and total
time of ping running.
As the destination, you can use a host name or IP address, for example,
ping example.com or
ping 192.168.3.100. The program sends
packets until you press
Ctrl–C.
If you only need to check the functionality of the connection, you can
limit the number of the packets with the -c option. For
example to limit ping to three packets, enter
ping -c 3 example.com.
ping -c 3 example.com
PING example.com (192.168.3.100) 56(84) bytes of data.
64 bytes from example.com (192.168.3.100): icmp_seq=1 ttl=49 time=188 ms
64 bytes from example.com (192.168.3.100): icmp_seq=2 ttl=49 time=184 ms
64 bytes from example.com (192.168.3.100): icmp_seq=3 ttl=49 time=183 ms

--- example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 183.417/185.447/188.259/2.052 ms
The default interval between two packets is one second. To change the
interval, ping provides the option -i. For example, to
increase the ping interval to ten seconds, enter
ping -i 10 example.com.
In a system with multiple network devices, it is sometimes useful to send
the ping through a specific interface address. To do so, use the
-I option with the name of the selected device, for
example, ping -I wlan1
example.com.
For more options and information about using ping, enter
ping -h or see the
ping(8) man page.
For IPv6 addresses use the ping6 command. Note, to ping
link-local addresses, you must specify the interface with
-I. The following command works, if the address is
reachable via eth1:
ping6 -I eth1 fe80::117:21ff:feda:a425
Apart from the configuration files described above, there are also systemd
unit files and various scripts that load the network services while the
machine is booting. These are started when the system is switched to the
multi-user.target target. Some of these unit files
and scripts are described in Some Unit Files and Start-Up Scripts for Network Programs. For
more information about systemd, see
Chapter 10, The systemd Daemon and for more information about the
systemd targets, see the man page of
systemd.special (man
systemd.special).
network.target
network.target is the systemd target for
networking, but its meaning depends on the settings provided by the system
administrator.
For more information, see http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/.
multi-user.target
multi-user.target is the systemd target for a
multiuser system with all required network services.
rpcbind
Starts the rpcbind utility that converts RPC program numbers to universal addresses. It is needed for RPC services, such as an NFS server.
ypserv
Starts the NIS server.
ypbind
Starts the NIS client.
/etc/init.d/nfsserver
Starts the NFS server.
/etc/init.d/postfix
Controls the postfix process.
A router is a networking device that forwards data (network packets) between two or more networks. You often use a router to connect your local network to a remote network (the Internet) or to connect local network segments. With openSUSE Leap you can build a router with features such as NAT (Network Address Translation) or advanced firewalling.
The following are basic steps to turn openSUSE Leap into a router.
Enable forwarding, for example in
/etc/sysctl.d/50-router.conf
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1
Then provide a static IPv4 and IPv6 IP setup for the interfaces. Enabling forwarding disables several mechanisms, for example IPv6 does not accept an IPv6 RA (router advertisement) anymore, which also prevents the creation of a default route.
In many situations (for example, when the same network can be reached via more than one interface, or when VPN is used, even on "normal" multi-homed hosts), you must disable the IPv4 reverse path filter (this feature does not currently exist for IPv6):
net.ipv4.conf.all.rp_filter = 0
You can also filter with firewall settings instead.
To accept an IPv6 RA (from the router on an external, uplink, or ISP interface) and create a default (or also a more specific) IPv6 route again, set:
net.ipv6.conf.${ifname}.accept_ra = 2
net.ipv6.conf.${ifname}.autoconf = 0
(Note: an interface name containing a dot, such as
“eth0.42”, must be written as eth0/42 in the
sysctl key, because the dot otherwise conflicts with the path separator.)
More router behavior and forwarding dependencies are described in https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt.
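Putting the sysctl pieces above together, a router's configuration file might look like the following sketch (using eth1 as the uplink interface is an assumption for illustration):

```shell
# Hypothetical /etc/sysctl.d/50-router.conf:
net.ipv4.conf.all.forwarding = 1
net.ipv6.conf.all.forwarding = 1

# Same network reachable via several interfaces, or VPN in use:
net.ipv4.conf.all.rp_filter = 0

# Still accept RAs and build a default route on the uplink (eth1):
net.ipv6.conf.eth1.accept_ra = 2
net.ipv6.conf.eth1.autoconf = 0
```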
To provide IPv6 on your internal (DMZ) interfaces, and announce yourself as
an IPv6 router and “autoconf networks” to the clients, install
and configure radvd in
/etc/radvd.conf, for example:
interface eth0
{
IgnoreIfMissing on; # do not fail if interface missed
AdvSendAdvert on; # enable sending RAs
AdvManagedFlag on; # IPv6 addresses managed via DHCPv6
AdvOtherConfigFlag on; # DNS, NTP... only via DHCPv6
AdvDefaultLifetime 3600; # client default route lifetime of 1 hour
prefix 2001:db8:0:1::/64 # (/64 is default and required for autoconf)
{
AdvAutonomous off; # Disable address autoconf (DHCPv6 only)
AdvValidLifetime 3600; # prefix (autoconf addr) is valid 1 h
AdvPreferredLifetime 1800; # prefix (autoconf addr) is preferred 1/2 h
}
}
Lastly, configure the firewall. In SuSEfirewall2, you need to set
FW_ROUTE="yes" (otherwise it resets the forwarding
sysctl again) and define the interfaces in the FW_DEV_INT,
FW_DEV_EXT (and FW_DEV_DMZ) zone
variables as needed, perhaps also FW_MASQUERADE="yes" and
FW_MASQ_DEV.
For some systems, there is a need for network connections that comply with more than the standard data security or availability requirements of a typical Ethernet device. In these cases, several Ethernet devices can be aggregated into a single bonding device.
The configuration of the bonding device is done by means of bonding module
options. The behavior is mainly affected by the mode of the bonding device.
By default, this is mode=active-backup which means
that a different slave device will become active if the active slave fails.
Using bonding devices is only of interest for machines where you have multiple real network cards available. In most configurations, this means that you should use the bonding configuration only in Dom0. Only if you have multiple network cards assigned to a VM Guest system is it also useful to set up the bond in a VM Guest.
To configure a bonding device, use the following procedure:
Run YaST › System › Network Settings.
Use Add and change the Device Type to Bond. Proceed with Next.
Select how to assign the IP address to the bonding device. Three methods are at your disposal:
No IP Address
Dynamic Address (with DHCP or Zeroconf)
Statically assigned IP Address
Use the method that is appropriate for your environment.
In the Bond Slaves tab, select the Ethernet devices that should be included in the bond by activating the related check box.
Edit the Bond Driver Options. The modes that are available for configuration are the following:
balance-rr
active-backup
balance-xor
broadcast
802.3ad
802.3ad is the standardized LACP “IEEE 802.3ad
Dynamic link aggregation” mode.
balance-tlb
balance-alb
Make sure that the parameter miimon=100 is added to the
Bond Driver Options. Without this parameter, the data
integrity is not checked regularly.
Click Next and leave YaST with OK to create the device.
All modes, and many more options, are explained in detail in the
Linux Ethernet Bonding Driver HOWTO found at
/usr/src/linux/Documentation/networking/bonding.txt
after installing the package kernel-source.
In specific network environments (such as High Availability), there are cases when you need to replace a bonding slave interface with another one. The reason may be a constantly failing network device. The solution is to set up hotplugging of bonding slaves.
The bond is configured as usual (according to man 5
ifcfg-bonding), for example:
ifcfg-bond0
STARTMODE='auto' # or 'onboot'
BOOTPROTO='static'
IPADDR='192.168.0.1/24'
BONDING_MASTER='yes'
BONDING_SLAVE_0='eth0'
BONDING_SLAVE_1='eth1'
BONDING_MODULE_OPTS='mode=active-backup miimon=100'
The slaves are specified with STARTMODE=hotplug and
BOOTPROTO=none:
ifcfg-eth0
STARTMODE='hotplug'
BOOTPROTO='none'
ifcfg-eth1
STARTMODE='hotplug'
BOOTPROTO='none'
BOOTPROTO=none uses the ethtool
options (when provided), but does not set the link up on ifup
eth0. The reason is that the slave interface is controlled by the
bond master.
STARTMODE=hotplug causes the slave interface to join the
bond automatically when it is available.
The udev rules in
/etc/udev/rules.d/70-persistent-net.rules need to be
changed to match the device by bus ID (udev KERNELS
keyword equal to "SysFS BusID" as visible in hwinfo
--netcard) instead of by MAC address. This allows replacement of
defective hardware (a network card in the same slot but with a different
MAC) and prevents confusion when the bond changes the MAC address of all its
slaves.
For example:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
KERNELS=="0000:00:19.0", ATTR{dev_id}=="0x0", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth0"
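To see where the bus ID for the KERNELS match comes from, you can resolve the device link in sysfs. The following is a minimal sketch that builds a mock sysfs tree under a temporary directory so it can run anywhere; on a real system you would read /sys/class/net/eth0/device directly (or use hwinfo --netcard).

```shell
# Build a mock sysfs layout to illustrate the lookup; on a real system,
# use /sys instead of "$mock" and skip the setup steps.
mock=$(mktemp -d)
mkdir -p "$mock/devices/pci0000:00/0000:00:19.0"
mkdir -p "$mock/class/net/eth0"
ln -s "$mock/devices/pci0000:00/0000:00:19.0" "$mock/class/net/eth0/device"

# The last path component of the resolved device link is the bus ID
# that goes into the udev KERNELS== match.
busid=$(basename "$(readlink -f "$mock/class/net/eth0/device")")
echo "$busid"    # prints: 0000:00:19.0
```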
At boot time, the systemd network.service does not
wait for the hotplug slaves, but for the bond to become ready, which
requires at least one available slave. When one of the slave interfaces gets
removed (unbind from NIC driver, rmmod of the NIC driver
or true PCI hotplug remove) from the system, the kernel removes it from the
bond automatically. When a new card is added to the system (replacement of
the hardware in the slot), udev renames it using
the bus-based persistent name rule to the name of the slave, and calls
ifup for it. The ifup call
automatically joins it into the bond.
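The remove-and-replace cycle described above can be exercised without physically pulling a card, by unbinding and re-binding the NIC driver through sysfs. This is a sketch only; the driver name e1000e and the bus ID 0000:00:19.0 are example values and must be adjusted to your hardware.

```shell
# Simulated hot-remove of a bonding slave via sysfs (run as root).
# "e1000e" and "0000:00:19.0" are example values -- adjust to your NIC.
echo 0000:00:19.0 > /sys/bus/pci/drivers/e1000e/unbind

# The kernel drops the device from the bond; check the remaining slaves:
cat /proc/net/bonding/bond0

# Re-bind the driver; udev renames the interface by bus ID and ifup
# re-joins it into the bond (STARTMODE=hotplug).
echo 0000:00:19.0 > /sys/bus/pci/drivers/e1000e/bind
```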
The term “link aggregation” is the general term which describes combining (or aggregating) network connections to provide a logical layer. Sometimes you find the terms “channel teaming”, “Ethernet bonding”, “port trunking”, etc., which are synonyms and refer to the same concept.
This concept is widely known as “bonding” and was originally integrated into the Linux kernel (see Section 13.8, “Setting Up Bonding Devices” for the original implementation). The term Network Teaming is used to refer to the new implementation of this concept.
The main difference between bonding and Network Teaming is that teaming supplies a set of small kernel modules responsible for providing an interface for teamd instances. Everything else is handled in user space. This is different from the original bonding implementation which contains all of its functionality exclusively in the kernel. For a comparison refer to Table 13.5, “Feature Comparison between Bonding and Team”.
| Feature | Bonding | Team |
|---|---|---|
| Source: http://libteam.org/files/teamdev.pp.pdf | ||
| broadcast, round-robin TX policy | yes | yes |
| active-backup TX policy | yes | yes |
| LACP (802.3ad) support | yes | yes |
| hash-based TX policy | yes | yes |
| user can set hash function | no | yes |
| TX load-balancing support (TLB) | yes | yes |
| TX load-balancing support for LACP | no | yes |
| Ethtool link monitoring | yes | yes |
| ARP link monitoring | yes | yes |
| NS/NA (IPV6) link monitoring | no | yes |
| RCU locking on TX/RX paths | no | yes |
| port prio and stickiness | no | yes |
| separate per-port link monitoring setup | no | yes |
| multiple link monitoring setup | limited | yes |
| VLAN support | yes | yes |
| multiple device stacking | yes | yes |
Both implementations, bonding and Network Teaming, can be used in parallel. Network Teaming is an alternative to the existing bonding implementation. It does not replace bonding.
Network Teaming can be used for different use cases. The two most important use cases are explained later and involve:
Load balancing between different network devices.
Failover from one network device to another in case one of the devices should fail.
Currently, there is no YaST module to support creating a teaming device. You need to configure Network Teaming manually. The general procedure, shown below, can be applied to all your Network Teaming configurations:
Make sure you have all the necessary packages installed. Install the packages libteam-tools, libteamdctl0, and python-libteam.
Create a configuration file under
/etc/sysconfig/network/. Usually it will be
ifcfg-team0. If you need more than one Network Teaming
device, give them ascending numbers.
This configuration file contains several variables which are explained in
the man pages (see man ifcfg and man
ifcfg-team). An example configuration can be found in your
system in the file /etc/sysconfig/network/ifcfg.template.
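As a minimal sketch (the variable names follow man 5 ifcfg-team; the runner, addresses, and port names here are example values, not a recommended configuration), an ifcfg-team0 for a two-port active-backup team could look like:

```
STARTMODE=auto
BOOTPROTO=static
IPADDR="192.168.1.1/24"
TEAM_RUNNER=activebackup
TEAM_PORT_DEVICE_0="eth0"
TEAM_PORT_DEVICE_1="eth1"
TEAM_LW_NAME=ethtool
```

The complete, annotated examples in the loadbalancing and failover use cases below cover the remaining variables.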
Remove the configuration files of the interfaces which will be used for the
teaming device (usually ifcfg-eth0 and
ifcfg-eth1).
It is recommended to make a backup and remove both files. Wicked will re-create the configuration files with the necessary parameters for teaming.
Optionally, check if everything is included in Wicked's configuration file:
tux > sudo wicked show-config
Start the Network Teaming device team0:
tux > sudo wicked ifup all team0
In case you need additional debug information, use the option
--debug all after the all subcommand.
Check the status of the Network Teaming device. This can be done by the following commands:
Get the state of the teamd instance from Wicked:
tux > sudo wicked ifstatus --verbose team0
Get the state of the entire instance:
tux > sudo teamdctl team0 state
Get the systemd state of the teamd instance:
tux > sudo systemctl status teamd@team0
Each of them shows a slightly different view depending on your needs.
In case you need to change something in the
ifcfg-team0 file afterward, reload its configuration
with:
tux > sudo wicked ifreload team0
Do not use systemctl for starting or
stopping the teaming device! Instead, use the wicked
command as shown above.
To completely remove the team device, use this procedure:
Stop the Network Teaming device team0:
tux > sudo wicked ifdown team0
Rename the file /etc/sysconfig/network/ifcfg-team0 to /etc/sysconfig/network/.ifcfg-team0.
Inserting a dot in front of the file name makes it
“invisible” for wicked. If you really do not need the
configuration anymore, you can also remove the file.
Reload the configuration:
tux > sudo wicked ifreload all
Loadbalancing is used to improve bandwidth. Use the following configuration
file to create a Network Teaming device with loadbalancing capabilities. Proceed
with Procedure 13.1, “General Procedure” to set up the device. Check the
output with teamdctl.
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDR="192.168.1.1/24" 2
IPADDR6="fd00:deca:fbad:50::1/64" 2
TEAM_RUNNER="loadbalance" 3
TEAM_LB_TX_HASH="ipv4,ipv6,eth,vlan"
TEAM_LB_TX_BALANCER_NAME="basic"
TEAM_LB_TX_BALANCER_INTERVAL="100"
TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4
TEAM_LW_NAME="ethtool" 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6
1. Controls the start of the teaming device. The value of auto means that the interface is set up when the network service is available and is started automatically on every reboot. In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE=manual.
2. Sets a static IP address (here 192.168.1.1 for IPv4 and fd00:deca:fbad:50::1 for IPv6). If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment out) the lines with IPADDR and IPADDR6.
3. Sets TEAM_RUNNER to loadbalance to activate the loadbalancing mode.
4. Specifies one or more devices which should be aggregated to create the Network Teaming device.
5. Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only whether the device is up or down. This makes the check fast, but it cannot detect failures beyond the network adapter itself. If you need a higher confidence in the connection, use the arp_ping option, which sends ARP requests to a remote host.
6. Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.
Failover is used to ensure high availability of a critical Network Teaming device by involving a parallel backup network device. The backup network device is running all the time and takes over if and when the main device fails.
Use the following configuration file to create a Network Teaming device with
failover capabilities. Proceed with Procedure 13.1, “General Procedure” to
set up the device. Check the output with teamdctl.
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDR="192.168.1.2/24" 2
IPADDR6="fd00:deca:fbad:50::2/64" 2
TEAM_RUNNER=activebackup 3
TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4
TEAM_LW_NAME=ethtool 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6
1. Controls the start of the teaming device. The value of auto means that the interface is set up when the network service is available and is started automatically on every reboot. In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE=manual.
2. Sets a static IP address (here 192.168.1.2 for IPv4 and fd00:deca:fbad:50::2 for IPv6). If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment out) the lines with IPADDR and IPADDR6.
3. Sets TEAM_RUNNER to activebackup to activate the failover mode.
4. Specifies one or more devices which should be aggregated to create the Network Teaming device.
5. Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only whether the device is up or down. This makes the check fast, but it cannot detect failures beyond the network adapter itself. If you need a higher confidence in the connection, use the arp_ping option, which sends ARP requests to a remote host.
6. Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.
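To see which port the activebackup runner is currently using, and to exercise the failover, teamdctl can be queried. A sketch, assuming the team device is named team0 and the item path runner.active_port is available for the activebackup runner:

```shell
# Show the currently active port of the activebackup runner
teamdctl team0 state item get runner.active_port

# Take the active slave down and confirm that the backup takes over
ip link set eth0 down
teamdctl team0 state item get runner.active_port
```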
VLAN is an abbreviation of Virtual Local Area Network. It allows the running of multiple logical (virtual) Ethernets over one single physical Ethernet. It logically splits the network into different broadcast domains so that packets are only switched between ports that are designated for the same VLAN.
The following use case creates two static VLANs on top of a team device:
vlan0,
bound to the IP address 192.168.10.1
vlan1,
bound to the IP address 192.168.20.1
Proceed as follows:
Enable the VLAN tags on your switch. If you want to use loadbalancing for your team device, your switch needs to be capable of Link Aggregation Control Protocol (LACP) (802.3ad). Consult your hardware manual about the details.
Decide if you want to use loadbalancing or failover for your team device. Set up your team device as described in Section 13.9.1, “Use Case: Loadbalancing with Network Teaming” or Section 13.9.2, “Use Case: Failover with Network Teaming”.
In /etc/sysconfig/network create a file
ifcfg-vlan0 with the following content:
STARTMODE="auto"
BOOTPROTO="static" 1
IPADDR='192.168.10.1/24' 2
ETHERDEVICE="team0" 3
VLAN_ID="0" 4
VLAN='yes'
1. Defines a fixed IP address, specified in IPADDR.
2. Defines the IP address, here with its netmask.
3. Contains the real interface to use for the VLAN interface, here our team device (team0).
4. Specifies a unique ID for the VLAN. Preferably, the file name and the VLAN_ID correspond. In our case the VLAN_ID is 0, which leads to the file name ifcfg-vlan0.
Copy the file /etc/sysconfig/network/ifcfg-vlan0 to
/etc/sysconfig/network/ifcfg-vlan1 and change the
following values:
IPADDR from 192.168.10.1/24 to 192.168.20.1/24.
VLAN_ID from 0 to
1.
Start the two VLANs:
root # wicked ifup vlan0 vlan1
Check the output of ifconfig:
root # ifconfig -a
[...]
vlan0     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
          inet addr:192.168.10.1  Bcast:192.168.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)
vlan1     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
          inet addr:192.168.20.1  Bcast:192.168.20.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)
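On systems where the legacy net-tools package is not installed, the same check can be done with the ip command from iproute2. A sketch, using the vlan0 and vlan1 interfaces configured above:

```shell
# "-d" (details) also prints the VLAN id and protocol of the interface
ip -d link show vlan0

# Show the configured IPv4/IPv6 addresses
ip addr show vlan1
```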
Software-defined networking (SDN) means separating the system that controls where traffic is sent (the control plane) from the underlying system that forwards traffic to the selected destination (the data plane, also called the forwarding plane). This means that the functions previously fulfilled by a single, usually inflexible switch, can now be separated between a switch (data plane) and its controller (control plane). In this model, the controller is programmable and can be very flexible and adapt quickly to changing network conditions.
Open vSwitch is software that implements a distributed virtual multilayer switch that is compatible with the OpenFlow protocol. OpenFlow allows a controller application to modify the configuration of a switch. OpenFlow is layered onto the TCP protocol and is implemented in a range of hardware and software. A single controller can thus drive multiple, very different switches.
Software-defined networking with Open vSwitch brings several advantages with it, especially when used together with virtual machines:
Networking states can be identified easily.
Networks and their live state can be moved from one host to another.
Network dynamics are traceable and external software can be enabled to respond to them.
You can apply and manipulate tags in network packets to identify which machine they are coming from or going to and maintain other networking context. Tagging rules can be configured and migrated.
Open vSwitch implements the GRE protocol (Generic Routing Encapsulation). This allows you, for example, to connect private VM networks to each other.
Open vSwitch can be used on its own, but is designed to integrate with networking hardware and can control hardware switches.
Install Open vSwitch and supplementary packages:
root # zypper install openvswitch openvswitch-switch
If you plan to use Open vSwitch together with the KVM hypervisor, additionally install tunctl. If you plan to use Open vSwitch together with the Xen hypervisor, additionally install openvswitch-kmp-xen.
Enable the Open vSwitch service:
root # systemctl enable openvswitch
Either restart the computer or use systemctl to start
the Open vSwitch service immediately:
root # systemctl start openvswitch
To check whether Open vSwitch was activated correctly, use:
root # systemctl status openvswitch
Open vSwitch consists of several components. Among them are a kernel module and various user space components. The kernel module is used for accelerating the data path, but is not necessary for a minimal Open vSwitch installation.
The central executables of Open vSwitch are its two daemons. When you start the
openvswitch service, you are indirectly starting
them.
The main Open vSwitch daemon (ovs-vswitchd) provides the
implementation of a switch. The Open vSwitch database daemon
(ovsdb-server) serves the database that stores the
configuration and state of Open vSwitch.
Open vSwitch also comes with several utilities that help you work with it. The following list is not exhaustive, but instead describes important commands only.
ovsdb-tool
Create, upgrade, compact, and query Open vSwitch databases. Do transactions on Open vSwitch databases.
ovs-appctl
Configure a running ovs-vswitchd or
ovsdb-server daemon.
ovs-dpctl, ovs-dpctl-top
Create, modify, visualize, and delete data paths. Using this tool can
interfere with ovs-vswitchd also performing data path
management. Therefore, it is often used for diagnostics only.
ovs-dpctl-top creates a top-like
visualization for data paths.
ovs-ofctl
Manage any switches adhering to the
OpenFlow protocol.
ovs-ofctl is not limited to interacting with Open vSwitch.
ovs-vsctl
Provides a high-level interface to the configuration database. It can be
used to query and modify the database. In effect, it shows the status of
ovs-vswitchd and can be used to configure it.
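For a quick, non-persistent experiment, a bridge can also be created directly with ovs-vsctl, bypassing the Wicked ifcfg configuration described below. A sketch (br-test is an example name; such a bridge does not survive reconfiguration by the network service):

```shell
# Create a bridge and attach a physical port -- for experimentation only,
# not persistent across wicked/ifcfg management
ovs-vsctl add-br br-test
ovs-vsctl add-port br-test eth0

# Print the resulting switch topology, then clean up
ovs-vsctl show
ovs-vsctl del-br br-test
```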
The following example configuration uses the Wicked network service that is used by default on openSUSE Leap. To learn more about Wicked, see Section 13.6, “Configuring a Network Connection Manually”.
When you have installed and started Open vSwitch, proceed as follows:
To configure a bridge for use by your virtual machine, create a file with content like this:
STARTMODE='auto' 1
BOOTPROTO='dhcp' 2
OVS_BRIDGE='yes' 3
OVS_BRIDGE_PORT_DEVICE_1='eth0' 4
1. Set up the bridge automatically when the network service is started.
2. The protocol to use for configuring the IP address.
3. Mark the configuration as an Open vSwitch bridge.
4. Choose which device/devices should be added to the bridge. To add more devices, append additional lines for each of them to the file:
OVS_BRIDGE_PORT_DEVICE_SUFFIX='DEVICE'
The SUFFIX can be any alphanumeric string. However, to avoid overwriting a previous definition, make sure the SUFFIX of each device is unique.
Save the file in the directory /etc/sysconfig/network
under the name ifcfg-br0. Instead of
br0, you can use any name you want. However,
the file name needs to begin with ifcfg-.
To learn about further options, refer to the man pages of
ifcfg (man 5 ifcfg) and
ifcfg-ovs-bridge (man 5
ifcfg-ovs-bridge).
Now start the bridge:
root # wicked ifup br0
When Wicked is done, it should output the name of the bridge and next to
it the state up.
After having created the bridge as described in Section 13.10.4, “Creating a Bridge with Open vSwitch”, you can use Open vSwitch to manage the network access of virtual machines created with KVM/QEMU.
To be able to best use the capabilities of Wicked, make some further
changes to the bridge configured before. Open the previously created
/etc/sysconfig/network/ifcfg-br0 and append a line
for another port device:
OVS_BRIDGE_PORT_DEVICE_2='tap0'
Additionally, set BOOTPROTO to none.
The file should now look like this:
STARTMODE='auto'
BOOTPROTO='none'
OVS_BRIDGE='yes'
OVS_BRIDGE_PORT_DEVICE_1='eth0'
OVS_BRIDGE_PORT_DEVICE_2='tap0'
The new port device tap0 will be configured in the next step.
Now add a configuration file for the tap0 device:
STARTMODE='auto'
BOOTPROTO='none'
TUNNEL='tap'
Save the file in the directory /etc/sysconfig/network
under the name ifcfg-tap0.
To be able to use this tap device from a virtual machine started as a
user who is not root, append:
TUNNEL_SET_OWNER=USER_NAME
To allow access for an entire group, append:
TUNNEL_SET_GROUP=GROUP_NAME
Finally, open the configuration for the device defined as the first
OVS_BRIDGE_PORT_DEVICE. If you did not change the name,
that should be eth0. Therefore, open
/etc/sysconfig/network/ifcfg-eth0 and make sure that
the following options are set:
STARTMODE='auto'
BOOTPROTO='none'
If the file does not exist yet, create it.
Restart the bridge interface using Wicked:
root # wicked ifreload br0
This will also trigger a reload of the newly defined bridge port devices.
To start a virtual machine, use, for example:
root # qemu-kvm \
  -drive file=/PATH/TO/DISK-IMAGE \
  -m 512 -net nic,vlan=0,macaddr=00:11:22:EE:EE:EE \
  -net tap,ifname=tap0,script=no,downscript=no
For further information on the usage of KVM/QEMU, see Part V, “Managing Virtual Machines with QEMU”.
Using Open vSwitch with libvirt #
After having created the bridge as described before in
Section 13.10.4, “Creating a Bridge with Open vSwitch”, you can add the bridge to an existing
virtual machine managed with libvirt. Since libvirt has some support for
Open vSwitch bridges already, you can use the bridge created in
Section 13.10.4, “Creating a Bridge with Open vSwitch” without further changes to the networking
configuration.
Open the domain XML file for the intended virtual machine:
root # virsh edit VM_NAME
Replace VM_NAME with the name of the desired virtual machine. This will open your default text editor.
Find the networking section of the document by looking for a section
starting with <interface type="..."> and ending
in </interface>.
Replace the existing section with a networking section that looks somewhat like this:
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
</interface>
virsh iface-* and Virtual Machine Manager with Open vSwitch
At the moment, the Open vSwitch compatibility of libvirt is not exposed
through the virsh iface-* tools and Virtual Machine Manager. If you use
any of these tools, your configuration can break.
You can now start or restart the virtual machine as usual.
For further information on the usage of libvirt, see
Part II, “Managing Virtual Machines with libvirt”.
The documentation section of the Open vSwitch project Web site
Whitepaper by the Open Networking Foundation about software-defined networking and the OpenFlow protocol
UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.
UEFI is becoming more and more available on PC systems and thus is replacing the traditional PC-BIOS. UEFI, for example, properly supports 64-bit systems and offers secure booting (“Secure Boot”, firmware version 2.3.1c or better required), which is one of its most important features. Lastly, with UEFI a standard firmware will become available on all x86 platforms.
UEFI additionally offers the following advantages:
Booting from large disks (over 2 TiB) with a GUID Partition Table (GPT).
CPU-independent architecture and drivers.
Flexible pre-OS environment with network capabilities.
CSM (Compatibility Support Module) to support booting legacy operating systems via a PC-BIOS-like emulation.
For more information, see http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface. The following sections are not meant as a general UEFI overview; these are only hints about how some features are implemented in openSUSE Leap.
In the world of UEFI, securing the bootstrapping process means establishing a chain of trust. The “platform” is the root of this chain of trust; in the context of openSUSE Leap, the mainboard and the on-board firmware could be considered the “platform”. In other words, it is the hardware vendor, and the chain of trust flows from that hardware vendor to the component manufacturers, the OS vendors, etc.
The trust is expressed via public key cryptography. The hardware vendor puts a so-called Platform Key (PK) into the firmware, representing the root of trust. The trust relationship with operating system vendors and others is documented by signing their keys with the Platform Key.
Finally, security is established by requiring that no code will be executed by the firmware unless it has been signed by one of these “trusted” keys—be it an OS boot loader, some driver located in the flash memory of some PCI Express card or on disk, or be it an update of the firmware itself.
To use Secure Boot, you need to have your OS loader signed with a key trusted by the firmware, and you need the OS loader to verify that the kernel it loads can be trusted.
Key Exchange Keys (KEK) can be added to the UEFI key database. This way, you can use other certificates, as long as they are signed with the private part of the PK.
Microsoft’s Key Exchange Key (KEK) is installed by default.
The Secure Boot feature is enabled by default on UEFI/x86_64 installations. You can find the option in the Boot Code Options tab of the Boot Loader Settings dialog. It supports booting when Secure Boot is activated in the firmware, while making it possible to boot when it is deactivated.
The Secure Boot feature requires that a GUID Partition Table (GPT) replaces the old partitioning with a Master Boot Record (MBR). If YaST detects EFI mode during the installation, it will try to create a GPT partition. UEFI expects to find the EFI programs on a FAT-formatted EFI System Partition (ESP).
Supporting UEFI Secure Boot essentially requires having a boot loader with a digital signature that the firmware recognizes as a trusted key. That key is trusted by the firmware a priori, without requiring any manual intervention.
There are two ways of getting there. One is to work with hardware vendors to have them endorse a SUSE key, which SUSE then signs the boot loader with. The other way is to go through Microsoft’s Windows Logo Certification program to have the boot loader certified and have Microsoft recognize the SUSE signing key (that is, have it signed with their KEK). By now, SUSE has its loader signed by the UEFI Signing Service (that is, Microsoft in this case).
At the implementation layer, SUSE uses the shim
loader which is installed by default. It is a smart solution that avoids
legal issues, and simplifies the certification and signing step
considerably. The shim loader’s job is to load a
boot loader such as GRUB 2 and verify it; this boot loader in
turn will load kernels signed by a SUSE key only.
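Whether the firmware actually booted with Secure Boot enabled can be checked from the running system. A sketch (the efivars path uses the standard EFI global-variable GUID; the exact variable path may differ on some firmware):

```shell
# Query the shim/MOK tooling for the Secure Boot state
mokutil --sb-state

# Alternatively, read the SecureBoot EFI variable directly; the last
# byte of the output is 1 when Secure Boot is enabled
od -An -t u1 /sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c
```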
There are two types of trusted users:
First, those who hold the keys. The Platform Key (PK) allows almost everything. The Key Exchange Key (KEK) allows everything a PK can, except changing the PK.
Second, anyone with physical access to the machine. A user with physical access can reboot the machine, and configure UEFI.
UEFI offers two types of variables to fulfill the needs of those users:
The first is the so-called “Authenticated Variables”, which can be updated from both within the boot process (the so-called Boot Services Environment) and the running OS. This can be done only when the new value of the variable is signed with the same key that the old value of the variable was signed with. And they can only be appended to or changed to a value with a higher serial number.
The second is the so-called “Boot Services Only Variables”.
These variables are accessible to any code that runs during the boot
process. After the boot process ends and before the OS starts, the boot
loader must call ExitBootServices. After
that, these variables are no longer accessible, and the OS cannot touch
them.
The various UEFI key lists are of the first type, as this allows online updating, adding, and blacklisting of keys, drivers, and firmware fingerprints. It is the second type of variable, the “Boot Services Only Variable”, that helps to implement Secure Boot in a secure and open source-friendly manner, and thus compatible with GPLv3.
SUSE starts with shim—a small and simple EFI
boot loader signed by SUSE and Microsoft.
This allows shim to load and execute.
shim then goes on to verify that the boot loader
it wants to load is trusted.
In a default situation shim will use an
independent SUSE certificate embedded in its body. In addition,
shim will allow you to “enroll”
additional keys, overriding the default SUSE key. In the following, we call
them “Machine Owner Keys” or MOKs for short.
Next the boot loader will verify and then boot the kernel, and the kernel will do the same on the modules.
If the user (“machine owner”) wants to replace any components
of the boot process, Machine Owner Keys (MOKs) are to be used. The
mokutil tool will help with signing components
and managing MOKs.
The enrollment process begins with rebooting the machine and interrupting
the boot process (for example, pressing a key) when
shim loads. shim will
then go into enrollment mode, allowing the user to replace the default SUSE
key with keys from a file on the boot partition. If the user chooses to do
so, shim will then calculate a hash of that file
and put the result in a “Boot Services Only” variable. This
allows shim to detect any change of the file made
outside of Boot Services and thus avoid tampering with the list of
user-approved MOKs.
All of this happens during boot time—only verified code is executing now. Therefore, only a user present at the console can use the machine owner's set of keys. It cannot be malware or a hacker with remote access to the OS because hackers or malware can only change the file, but not the hash stored in the “Boot Services Only” variable.
The boot loader, after having been loaded and verified by
shim, will call back to
shim when it wants to verify the kernel—to
avoid duplication of the verification code. Shim
will use the same list of MOKs for this and tell the boot loader whether it
can load the kernel.
This way, you can install your own kernel or boot loader. It is only
necessary to install a new set of keys and authorize them by being
physically present during the first reboot. Because MOKs are a list and
not a single MOK, you can make shim trust keys
from several vendors, allowing dual- and multi-boot from the boot loader.
The following is based on http://en.opensuse.org/openSUSE:UEFI#Booting_a_custom_kernel.
Secure Boot does not prevent you from using a self-compiled kernel. You must sign it with your own certificate and make that certificate known to the firmware or MOK.
Create a custom X.509 key and certificate used for signing:
openssl req -new -x509 -newkey rsa:2048 -keyout key.asc \ -out cert.pem -nodes -days 666 -subj "/CN=$USER/"
For more information about creating certificates, see http://en.opensuse.org/openSUSE:UEFI_Image_File_Sign_Tools#Create_Your_Own_Certificate.
Package the key and the certificate as a PKCS#12 structure:
tux > openssl pkcs12 -export -inkey key.asc -in cert.pem \
-name kernel_cert -out cert.p12
Generate an NSS database for use with pesign:
tux > certutil -d . -N
Import the key and the certificate contained in PKCS#12 into the NSS database:
tux > pk12util -d . -i cert.p12
“Bless” the kernel with the new signature using
pesign:
tux > pesign -n . -c kernel_cert -i arch/x86/boot/bzImage \
-o vmlinuz.signed -s
List the signatures on the kernel image:
tux > pesign -n . -S -i vmlinuz.signed
At that point, you can install the kernel in /boot
as usual. Because the kernel now has a custom signature the certificate
used for signing needs to be imported into the UEFI firmware or MOK.
Convert the certificate to the DER format for import into the firmware or MOK:
tux > openssl x509 -in cert.pem -outform der -out cert.der
Copy the certificate to the ESP for easier access:
tux > sudo cp cert.der /boot/efi/
Use mokutil to launch the MOK list automatically.
Import the certificate to MOK:
tux > mokutil --root-pw --import cert.der
The --root-pw option enables usage of the root
user directly.
Check the list of certificates that are prepared to be enrolled:
tux > mokutil --list-new
Reboot the system; shim should launch
MokManager. You need to enter the root password to confirm the
import of the certificate to the MOK list.
Check if the newly imported key was enrolled:
tux > mokutil --list-enrolled
Alternatively, this is the procedure if you want to launch MOK manually:
Reboot
In the GRUB 2 menu press the 'c' key.
Type:
chainloader $efibootdir/MokManager.efi
boot
Select Enroll key from disk.
Navigate to the cert.der file and press
Enter.
Follow the instructions to enroll the key. Normally this should be
pressing '0' and then 'y' to
confirm.
Alternatively, the firmware menu may provide ways to add a new key to the Signature Database.
There is no support for adding non-inbox drivers (that is, drivers that do not come with openSUSE Leap) during installation with Secure Boot enabled. The signing key used for SolidDriver/PLDP is not trusted by default.
It is possible to install third party drivers during installation with Secure Boot enabled in two different ways. In both cases:
Add the needed keys to the firmware database via firmware/system management tools before the installation. This option depends on the specific hardware you are using. Consult your hardware vendor for more information.
Use a bootable driver ISO from https://drivers.suse.com/ or your hardware vendor to enroll the needed keys in the MOK list at first boot.
To use the bootable driver ISO to enroll the driver keys to the MOK list, follow these steps:
Burn the ISO image above to an empty CD/DVD medium.
Start the installation using the new CD/DVD medium, having the standard installation media at hand or a URL to a network installation server.
If doing a network installation, enter the URL of the network
installation source on the boot command line using the
install= option.
If doing installation from optical media, the installer will first boot from the driver kit and then ask to insert the first installation disk of the product.
An initrd containing updated drivers will be used for installation.
For more information, see https://drivers.suse.com/doc/Usage/Secure_Boot_Certificate.html.
When booting in Secure Boot mode, the following features apply:
Installation to UEFI default boot loader location, a mechanism to keep or restore the EFI boot entry.
Reboot via UEFI.
Xen hypervisor will boot with UEFI when there is no legacy BIOS to fall back to.
UEFI IPv6 PXE boot support.
UEFI videomode support, the kernel can retrieve video mode from UEFI to configure KMS mode with the same parameters.
UEFI booting from USB devices is supported.
When booting in Secure Boot mode, the following limitations apply:
To ensure that Secure Boot cannot be easily circumvented, some kernel features are disabled when running under Secure Boot.
Boot loader, kernel, and kernel modules must be signed.
Kexec and Kdump are disabled.
Hibernation (suspend on disk) is disabled.
Access to /dev/kmem and
/dev/mem is not possible, not even as root user.
Access to the I/O port is not possible, not even as root user. All X11 graphical drivers must use a kernel driver.
PCI BAR access through sysfs is not possible.
custom_method in ACPI is not available.
debugfs for asus-wmi module is not available.
the acpi_rsdp parameter does not have any effect on
the kernel.
http://www.uefi.org —UEFI home page where you can find the current UEFI specifications.
Blog posts by Olaf Kirch and Vojtěch Pavlík (the chapter above is heavily based on these posts):
http://en.opensuse.org/openSUSE:UEFI —UEFI with openSUSE.
This chapter starts with information about various software packages, the
virtual consoles and the keyboard layout. We talk about software components
like bash,
cron and
logrotate, because they were
changed or enhanced during the last release cycles. Even if they are small
or considered of minor importance, users may want to change their default
behavior, because these components are often closely coupled with the
system. The chapter concludes with a section about language and
country-specific settings (I18N and L10N).
The programs bash,
cron,
logrotate,
locate,
ulimit and
free are very important for system
administrators and many users. Man pages and info pages are two useful
sources of information about commands, but neither is always available. GNU
Emacs is a popular and very configurable text editor.
bash Package and /etc/profile #
Bash is the default system shell. When used as a login shell, it reads several initialization files. Bash processes them in the order they appear in this list:
/etc/profile
~/.profile
/etc/bash.bashrc
~/.bashrc
Make custom settings in ~/.profile or
~/.bashrc. To ensure the correct processing of these
files, it is necessary to copy the basic settings from
/etc/skel/.profile or
/etc/skel/.bashrc into the home directory of the user.
It is recommended to copy the settings from /etc/skel
after an update. Execute the following shell commands to prevent the loss of
personal adjustments:
tux > mv ~/.bashrc ~/.bashrc.old
tux > cp /etc/skel/.bashrc ~/.bashrc
tux > mv ~/.profile ~/.profile.old
tux > cp /etc/skel/.profile ~/.profile
Then copy personal adjustments back from the *.old files.
Use cron to automatically run
commands in the background at
predefined times. cron uses specially
formatted time tables, and the tool comes with several default ones. Users
can also specify custom tables, if needed.
The cron tables are located in /var/spool/cron/tabs.
/etc/crontab serves as a systemwide cron table. Enter
the user name to run the command directly after the time table and before
the command. In Example 15.1, “Entry in /etc/crontab”,
root is entered. Package-specific
tables, located in /etc/cron.d, have the same format.
See the cron man page (man cron).
1-59/5 * * * * root test -x /usr/sbin/atrun && /usr/sbin/atrun
You cannot edit /etc/crontab by calling the command
crontab -e. This file must be loaded directly into an
editor, then modified and saved.
A number of packages install shell scripts to the directories
/etc/cron.hourly, /etc/cron.daily,
/etc/cron.weekly and
/etc/cron.monthly, whose execution is controlled by
/usr/lib/cron/run-crons.
/usr/lib/cron/run-crons is run every 15 minutes from
the main table (/etc/crontab). This guarantees that
processes that may have been neglected can be run at the proper time.
To run the hourly, daily or other
periodic maintenance scripts at custom times, remove the time stamp files
regularly using /etc/crontab entries (see
Example 15.2, “/etc/crontab: Remove Time Stamp Files”, which removes the
hourly one before every full hour, the
daily one once a day at 2:14 a.m., etc.).
59 * * * * root rm -f /var/spool/cron/lastrun/cron.hourly
14 2 * * * root rm -f /var/spool/cron/lastrun/cron.daily
29 2 * * 6 root rm -f /var/spool/cron/lastrun/cron.weekly
44 2 1 * * root rm -f /var/spool/cron/lastrun/cron.monthly
Or you can set DAILY_TIME in
/etc/sysconfig/cron to the time at which
cron.daily should start. The setting of
MAX_NOT_RUN ensures that the daily tasks get triggered to
run, even if the user did not turn on the computer at the specified
DAILY_TIME for a longer time. The maximum value of
MAX_NOT_RUN is 14 days.
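The two variables can be combined in /etc/sysconfig/cron; a minimal sketch (the values below are examples, not defaults):

```shell
# Excerpt from /etc/sysconfig/cron (example values):
# start cron.daily at 22:30 ...
DAILY_TIME="22:30"
# ... and catch up within 5 days if the machine was off at that time
MAX_NOT_RUN="5"
```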
The daily system maintenance jobs are distributed to various scripts for
reasons of clarity. They are contained in the package
aaa_base.
/etc/cron.daily contains, for example, the components
suse.de-backup-rpmdb,
suse.de-clean-tmp or
suse.de-cron-local.
To avoid the mail-flood caused by cron status messages, the default value of
SEND_MAIL_ON_NO_ERROR in
/etc/sysconfig/cron is set to "no"
for new installations. Even with this option set to "no",
cron data output will still be sent to the MAILTO
address, as documented in the cron man page.
When updating an existing installation, it is recommended to set these values according to your needs.
There are several system services (daemons) that, along
with the kernel itself, regularly record the system status and specific
events onto log files. This way, the administrator can regularly check the
status of the system at a certain point in time, recognize errors or faulty
functions and troubleshoot them with pinpoint precision. These log files are
normally stored in /var/log as specified by FHS and grow
on a daily basis. The logrotate package helps
control the growth of these files. For more details refer to Section 3.3, “Managing Log Files with logrotate”.
locate Command #
locate, a command for quickly finding files, is not
included in the standard scope of installed software. If desired, install
the package mlocate, the successor of the package
findutils-locate. The
updatedb process is started
automatically every night or about 15 minutes after booting the system.
ulimit Command #
With the ulimit (user limits)
command, it is possible to set limits for the use of system resources and to
have these displayed. ulimit is especially useful for
limiting available memory for applications. With this, an application can be
prevented from co-opting too much of the system resources and slowing or
even hanging up the operating system.
ulimit can be used with various options. To limit memory
usage, use the options listed in Table 15.1, “ulimit: Setting Resources for the User”.
ulimit: Setting Resources for the User #

-m : The maximum resident set size
-v : The maximum amount of virtual memory available to the shell
-s : The maximum size of the stack
-c : The maximum size of core files created
-a : All current limits are reported
Systemwide default entries are set in /etc/profile.
Editing this file directly is not recommended, because changes will be
overwritten during system upgrades. To customize systemwide profile
settings, use /etc/profile.local. Per-user settings
should be made in
~USER/.bashrc.
ulimit: Settings in ~/.bashrc #

# Limits maximum resident set size (physical memory):
ulimit -m 98304

# Limits of virtual memory:
ulimit -v 98304
Memory allocations must be specified in KB. For more detailed information,
see man bash.
ulimit Support
Not all shells support ulimit directives. PAM (for
example, pam_limits) offers comprehensive adjustment
possibilities as an alternative to ulimit.
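The effect of a ulimit setting can be verified in a subshell, which keeps the change from affecting the current session. This sketch uses the -n (open files) limit because it reliably takes effect on current kernels; the memory options from the table above work the same way syntactically:

```shell
# Lower the soft open-files limit in a subshell, then print it;
# the parent shell's limit is left untouched.
( ulimit -S -n 512; ulimit -S -n )   # prints 512
ulimit -S -n                         # parent limit is unchanged
```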
free Command #
The free command displays the total amount of free and
used physical memory as well as swap space in the system and the buffers and
cache consumed by the kernel. The concept of available
RAM dates back to before the days of unified memory management.
The slogan free memory is bad memory applies well to
Linux. As a result, Linux has always made the effort to balance out caches
without actually allowing free or unused memory.
The kernel does not have direct knowledge of any applications or user data.
Instead, it manages applications and user data in a page
cache. If memory runs short, parts of it are written to the swap
partition or to files, from which they can initially be read using the
mmap system call (see man mmap).
The kernel also contains other caches, such as the slab
cache, where the caches used for network access are stored. This
may explain the differences between the counters in
/proc/meminfo. Most, but not all, of them can be
accessed via /proc/slabinfo.
However, if your goal is to find out how much RAM is currently being used,
find this information in /proc/meminfo.
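The available-RAM estimate can be read directly from /proc/meminfo; a minimal sketch, assuming a Linux kernel recent enough to export MemAvailable (values are in kB; free(1) reports the same counters in summarized form):

```shell
# Print the total and available memory lines from /proc/meminfo.
awk '/^MemTotal:|^MemAvailable:/' /proc/meminfo
```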
For some GNU applications (such as tar), the man pages are no longer
maintained. For these commands, use the --help option to
get a quick overview, or the info pages, which provide more in-depth
instructions. Info
is GNU's hypertext system. Read an introduction to this system by entering
info info. Info pages can be viewed with
Emacs by entering emacs -f info or
directly in a console with info. You can also use tkinfo,
xinfo or the help system to view info pages.
man Command #
To read a man page enter man
MAN_PAGE. If a man page with the same name exists
in different sections, they will all be listed with the corresponding
section numbers. Select the one to display. If you do not enter a section
number within a few seconds, the first man page will be displayed.
To change this to the default system behavior, set
MAN_POSIXLY_CORRECT=1 in a shell initialization file such
as ~/.bashrc.
GNU Emacs is a complex work environment. The following sections cover the configuration files processed when GNU Emacs is started. More information is available at http://www.gnu.org/software/emacs/.
On start-up, Emacs reads several files containing the settings of the user,
system administrator and distributor for customization or preconfiguration.
The initialization file ~/.emacs is installed to the
home directories of the individual users from /etc/skel.
.emacs, in turn, reads the file
/etc/skel/.gnu-emacs. To customize the program, copy
.gnu-emacs to the home directory (with cp
/etc/skel/.gnu-emacs ~/.gnu-emacs) and make the desired settings
there.
.gnu-emacs defines the file
~/.gnu-emacs-custom as custom-file.
If users make settings with the customize options in
Emacs, the settings are saved to ~/.gnu-emacs-custom.
With openSUSE Leap, the emacs
package installs the file site-start.el in the directory
/usr/share/emacs/site-lisp. The file
site-start.el is loaded before the initialization file
~/.emacs. Among other things,
site-start.el ensures that special configuration files
distributed with Emacs add-on packages, such as
psgml, are loaded automatically.
Configuration files of this type are located in
/usr/share/emacs/site-lisp, too, and always begin with
suse-start-. The local system administrator can specify
systemwide settings in default.el.
More information about these files is available in the Emacs info file under
Init File: info:/emacs/InitFile.
Information about how to disable the loading of these files (if necessary) is
also provided at this location.
The components of Emacs are divided into several packages:
The base package emacs.
emacs-x11 (usually installed):
the program with X11 support.
emacs-nox: the program
without X11 support.
emacs-info: online documentation
in info format.
emacs-el: the uncompiled library
files in Emacs Lisp. These are not required at runtime.
Numerous add-on packages can be installed if needed:
emacs-auctex (LaTeX),
psgml (SGML and XML),
gnuserv (client and server
operation) and others.
Linux is a multiuser and multitasking system. The advantages of these features can be appreciated even on a stand-alone PC system. In text mode, there are six virtual consoles available. Switch between them using Alt–F1 through Alt–F6. The seventh console is reserved for X and the tenth console shows kernel messages.
To switch to a console from X without shutting it down, use Ctrl–Alt–F1 to Ctrl–Alt–F6. To return to X, press Alt–F7.
To standardize the keyboard mapping of programs, changes were made to the following files:
/etc/inputrc
/etc/X11/Xmodmap
/etc/skel/.emacs
/etc/skel/.gnu-emacs
/etc/skel/.vimrc
/etc/csh.cshrc
/etc/termcap
/usr/share/terminfo/x/xterm
/usr/share/X11/app-defaults/XTerm
/usr/share/emacs/VERSION/site-lisp/term/*.el
These changes only affect applications that use terminfo
entries or whose configuration files are changed directly
(vi, emacs, etc.). Applications not
shipped with the system should be adapted to these defaults.
Under X, the compose key (multikey) can be enabled as explained in
/etc/X11/Xmodmap.
Further settings are possible using the X Keyboard Extension (XKB). This extension is also used by the desktop environment GNOME (gswitchit).
Information about XKB is available in the documents listed in
/usr/share/doc/packages/xkeyboard-config (part of the
xkeyboard-config package).
The system is, to a very large extent, internationalized and can be modified for local needs. Internationalization (I18N) allows specific localization (L10N). The abbreviations I18N and L10N are derived from the first and last letters of the words and, in between, the number of letters omitted.
Settings are made with LC_ variables defined in the
file /etc/sysconfig/language. This refers not only to
native language support, but also to the categories
Messages (Language), Character Set,
Sort Order, Time and Date,
Numbers and Money. Each of these
categories can be defined directly with its own variable or indirectly with a
master variable in the file language (see the
locale man page).
RC_LC_MESSAGES,
RC_LC_CTYPE,
RC_LC_COLLATE,
RC_LC_TIME,
RC_LC_NUMERIC,
RC_LC_MONETARY
These variables are passed to the shell without the
RC_ prefix and represent the listed categories.
The shell profiles concerned are listed below. The current setting can be
shown with the command locale.
RC_LC_ALL
This variable, if set, overwrites the values of the variables already mentioned.
RC_LANG
If none of the previous variables are set, this is the fallback. By
default, only RC_LANG is set. This makes it
easier for users to enter their own values.
ROOT_USES_LANG
A yes or no variable. If set to
no, root
always works in the POSIX environment.
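The effective values of all these categories can be inspected with the locale command; a minimal sketch (LC_ALL is forced to C here only to make the output deterministic, and assumes a glibc-based system):

```shell
# Show the effective numeric category; LC_ALL overrides all others.
LC_ALL=C locale | grep '^LC_NUMERIC='   # e.g. LC_NUMERIC="C"
```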
The variables can be set with the YaST sysconfig editor. The value of such a variable contains the language code, country code, encoding and modifier. The individual components are connected by special characters:
LANG=<language>[[_<COUNTRY>].<Encoding>[@<Modifier>]]
You should always set the language and country codes together. Language settings follow the standard ISO 639 available at http://www.evertype.com/standards/iso639/iso639-en.html and http://www.loc.gov/standards/iso639-2/. Country codes are listed in ISO 3166, see http://en.wikipedia.org/wiki/ISO_3166.
It only makes sense to set values for which usable description files can be
found in /usr/lib/locale. Additional description files
can be created from the files in /usr/share/i18n using
the command localedef. The description files are part of
the glibc-i18ndata package. A description file for
en_US.UTF-8 (for English and United States) can be
created with:
localedef -i en_US -f UTF-8 en_US.UTF-8
LANG=en_US.UTF-8
This is the default setting if American English is selected during installation. If you selected another language, that language is enabled but still with UTF-8 as the character encoding.
LANG=en_US.ISO-8859-1
This sets the language to English, country to United States and the
character set to ISO-8859-1. This character set does
not support the Euro sign, but it can be useful sometimes for programs
that have not been updated to support UTF-8. The
string defining the charset (ISO-8859-1 in this case)
is then evaluated by programs like Emacs.
LANG=en_IE@euro
The above example explicitly includes the Euro sign in a language setting. This setting is obsolete now, as UTF-8 also covers the Euro symbol. It is only useful if an application supports ISO-8859-15 and not UTF-8.
Changes to /etc/sysconfig/language are activated by the
following process chain:
For the Bash: /etc/profile reads
/etc/profile.d/lang.sh which, in turn, analyzes
/etc/sysconfig/language.
For tcsh: At login, /etc/csh.login reads
/etc/profile.d/lang.csh which, in turn, analyzes
/etc/sysconfig/language.
This ensures that any changes to
/etc/sysconfig/language are available at the next login
to the respective shell, without having to manually activate
them.
Users can override the system defaults by editing their
~/.bashrc accordingly. For example, if you do not want
to use the system-wide en_US for program messages,
include LC_MESSAGES=es_ES so that messages are
displayed in Spanish instead.
~/.i18n #
If you are not satisfied with locale system defaults, change the settings in
~/.i18n according to the Bash scripting syntax. Entries
in ~/.i18n override system defaults from
/etc/sysconfig/language. Use the same variable names
but without the RC_ name space prefixes. For example, use
LANG instead of RC_LANG:
LANG=cs_CZ.UTF-8 LC_COLLATE=C
Files in the category Messages are, as a rule, only
stored in the corresponding language directory (like
en) to have a fallback. If you set
LANG to en_US and the message
file in /usr/share/locale/en_US/LC_MESSAGES does not
exist, it falls back to
/usr/share/locale/en/LC_MESSAGES.
A fallback chain can also be defined, for example, for Breton to French or for Galician to Spanish to Portuguese:
LANGUAGE="br_FR:fr_FR"
LANGUAGE="gl_ES:es_ES:pt_PT"
If desired, use the Norwegian variants Nynorsk and Bokmål instead (with
additional fallback to no):
LANG="nn_NO"
LANGUAGE="nn_NO:nb_NO:no"
or
LANG="nb_NO"
LANGUAGE="nb_NO:nn_NO:no"
Note that in Norwegian, LC_TIME is also treated
differently.
One problem that can arise is a separator used to delimit groups of digits
not being recognized properly. This occurs if LANG
is set to only a two-letter language code like de, but
the definition file glibc uses is located in
/usr/lib/locale/de_DE/LC_NUMERIC. Thus
LC_NUMERIC must be set to de_DE
to make the separator definition visible to the system.
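The effect described above can be demonstrated with printf, whose floating-point output honors LC_NUMERIC. A minimal sketch; the German line requires the de_DE locale to be installed, so only the C-locale line is guaranteed to work everywhere:

```shell
# With the C locale, the radix character is a period:
LC_ALL=C printf '%.2f\n' 3.14        # prints 3.14
# With a German locale (if installed), it would be a comma instead:
# LC_ALL=de_DE.UTF-8 printf '%.2f\n' 3.14
```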
The GNU C Library Reference Manual, Chapter
“Locales and Internationalization”. It is included in
glibc-info.
Markus Kuhn, UTF-8 and Unicode FAQ for Unix/Linux, currently at http://www.cl.cam.ac.uk/~mgk25/unicode.html.
Unicode-HOWTO by Bruno Haible, available at http://tldp.org/HOWTO/Unicode-HOWTO-1.html.
udev #
The kernel can add or remove almost any device in a running system. Changes
in the device state (whether a device is plugged in or removed) need to be
propagated to user space. Devices need to be configured when they are
plugged in and recognized. Users of a certain device need to be informed
about any changes in this device's recognized state.
udev provides the needed
infrastructure to dynamically maintain the device node files and symbolic
links in the /dev directory.
udev rules provide a way to plug
external tools into the kernel device event processing. This allows you to
customize udev device handling by adding certain scripts to execute as part of kernel device
handling, or request and import additional data to evaluate during device
handling.
/dev Directory #
The device nodes in the /dev directory provide access
to the corresponding kernel devices. With
udev, the /dev
directory reflects the current state of the kernel. Every kernel device has
one corresponding device file. If a device is disconnected from the system,
the device node is removed.
The content of the /dev directory is kept on a
temporary file system and all files are rendered at every system start-up.
Manually created or modified files do not, by design, survive a reboot.
Static files and directories that should always be in the
/dev directory regardless of the state of the
corresponding kernel device can be created with systemd-tmpfiles. The
configuration files are found in /usr/lib/tmpfiles.d/
and /etc/tmpfiles.d/; for more information, see the
systemd-tmpfiles(8) man page.
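As a sketch, such a drop-in file (the file name and path entry below are hypothetical) could look like this; see the systemd-tmpfiles(8) man page for the field syntax:

```
# /etc/tmpfiles.d/static-dev.conf (hypothetical example)
# Type  Path            Mode  UID   GID   Age  Argument
d       /dev/my-static  0755  root  root  -    -
```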
uevents and udev #
The required device information is exported by the
sysfs file system. For every
device the kernel has detected and initialized, a directory with the device
name is created. It contains attribute files with device-specific
properties.
Every time a device is added or removed, the kernel sends a uevent to notify
udev of the change. The
udev daemon reads and parses all
provided rules from the /etc/udev/rules.d/*.rules files
once at start-up and keeps them in memory. If rules files are changed, added
or removed, the daemon can reload the in-memory representation of all rules
with the command udevadm control --reload. For more
details on udev rules and their
syntax, refer to Section 16.6, “Influencing Kernel Device Event Handling with udev Rules”.
Every received event is matched against the set of provided rules. The rules
can add or change event environment keys, request a specific name for the
device node to create, add symbolic links pointing to the node or add
programs to run after the device node is created. The driver core
uevents are received from a kernel
netlink socket.
The kernel bus drivers probe for devices. For every detected device, the
kernel creates an internal device structure while the driver core sends a
uevent to the udev daemon. Bus
devices identify themselves by a specially-formatted ID, which tells what
kind of device it is. Usually these IDs consist of vendor and product ID and
other subsystem-specific values. Every bus has its own scheme for these IDs,
called MODALIAS. The kernel takes the device information,
composes a MODALIAS ID string from it and sends that string
along with the event. For a USB mouse, it looks like this:
MODALIAS=usb:v046DpC03Ed2000dc00dsc00dp00ic03isc01ip02
Every device driver carries a list of known aliases for devices it can
handle. The list is contained in the kernel module file itself. The program
depmod reads the ID lists and creates the file
modules.alias in the kernel's
/lib/modules directory for all currently available
modules. With this infrastructure, module loading is as easy as calling
modprobe for every event that carries a
MODALIAS key. If modprobe $MODALIAS is
called, it matches the device alias composed for the device with the aliases
provided by the modules. If a matching entry is found, that module is
loaded. All this is automatically triggered by
udev.
All device events happening during the boot process before the
udev daemon is running are lost,
because the infrastructure to handle these events resides on the root file
system and is not available at that time. To cover that loss, the kernel
provides a uevent file located in the device directory
of every device in the sysfs
file system. By writing add to that file, the kernel
resends the same event as the one lost during boot. A simple loop over all
uevent files in /sys triggers all
events again to create the device nodes and perform device setup.
As an example, a USB mouse present during boot may not be initialized by the
early boot logic, because the driver is not available at that time. The
event for the device discovery was lost and failed to find a kernel module
for the device. Instead of manually searching for connected
devices, udev requests all device
events from the kernel after the root file system is available, so the event
for the USB mouse device runs again. Now it finds the kernel module on the
mounted root file system and the USB mouse can be initialized.
From user space, there is no visible difference between a device coldplug sequence and a device discovery during runtime. In both cases, the same rules are used to match and the same configured programs are run.
udev Daemon #
The program udevadm monitor can be used to visualize the
driver core events and the timing of the
udev event processes.
UEVENT[1185238505.276660] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UDEV  [1185238505.279198] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1 (usb)
UEVENT[1185238505.279527] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UDEV  [1185238505.285573] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0 (usb)
UEVENT[1185238505.298878] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UDEV  [1185238505.305026] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10 (input)
UEVENT[1185238505.305442] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)
UEVENT[1185238505.306440] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.325384] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/event4 (input)
UDEV  [1185238505.342257] add /devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10/mouse2 (input)
The UEVENT lines show the events the kernel has sent over
netlink. The UDEV lines show the finished
udev event handlers. The timing is
printed in microseconds. The time between UEVENT and
UDEV is the time
udev took to process this event or
the udev daemon has delayed its
execution to synchronize this event with related and already running events.
For example, events for hard disk partitions always wait for the main disk
device event to finish, because the partition events may rely on the data
that the main disk event has queried from the hardware.
udevadm monitor --env shows the complete event
environment:
ACTION=add
DEVPATH=/devices/pci0000:00/0000:00:1d.2/usb3/3-1/3-1:1.0/input/input10
SUBSYSTEM=input
SEQNUM=1181
NAME="Logitech USB-PS/2 Optical Mouse"
PHYS="usb-0000:00:1d.2-1/input0"
UNIQ=""
EV=7
KEY=70000 0 0 0 0
REL=103
MODALIAS=input:b0003v046DpC03Ee0110-e0,1,2,k110,111,112,r0,1,8,amlsfw
udev also sends messages to syslog.
The default syslog priority that controls which messages are sent to syslog
is specified in the udev
configuration file /etc/udev/udev.conf. The log
priority of the running daemon can be changed with udevadm control
--log_priority=LEVEL/NUMBER.
udev Rules #
A udev rule can match any property
the kernel adds to the event itself or any information that the kernel
exports to sysfs. The rule can also request additional
information from external programs. Every event is matched against all
provided rules. All rules are located in the
/etc/udev/rules.d directory.
Every line in the rules file contains at least one key value pair. There are
two kinds of keys, match and assignment keys. If all match keys match their
values, the rule is applied and the assignment keys are assigned the
specified value. A matching rule may specify the name of the device node,
add symbolic links pointing to the node or run a specified program as part
of the event handling. If no matching rule is found, the default device node
name is used to create the device node. Detailed information about the rule
syntax and the provided keys to match or import data are described in the
udev man page. The following
example rules provide a basic introduction to
udev rule syntax. The example rules
are all taken from the udev default
rule set that is located under
/etc/udev/rules.d/50-udev-default.rules.
udev Rules #
# console
KERNEL=="console", MODE="0600", OPTIONS="last_rule"
# serial devices
KERNEL=="ttyUSB*", ATTRS{product}=="[Pp]alm*Handheld*", SYMLINK+="pilot"
# printer
SUBSYSTEM=="usb", KERNEL=="lp*", NAME="usb/%k", SYMLINK+="usb%k", GROUP="lp"
# kernel firmware loader
SUBSYSTEM=="firmware", ACTION=="add", RUN+="firmware.sh"
The console rule consists of three keys: one match
key (KERNEL) and two assign keys
(MODE, OPTIONS). The
KERNEL match rule searches the device list for any items
of the type console. Only exact matches are valid and
trigger this rule to be executed. The MODE key assigns
special permissions to the device node, in this case, read and write
permissions to the owner of this device only. The OPTIONS
key makes this rule the last rule to be applied to any device of this type.
Any later rule matching this particular device type does not have any
effect.
The serial devices rule is not available in
50-udev-default.rules anymore, but it is still worth
considering. It consists of two match keys (KERNEL and
ATTRS) and one assign key (SYMLINK).
The KERNEL key searches for all devices of the
ttyUSB type. Using the * wild card,
this key matches several of these devices. The second match key,
ATTRS, checks whether the product
attribute file in sysfs for any ttyUSB
device contains a certain string. The assign key
(SYMLINK) triggers the addition of a symbolic link to
this device under /dev/pilot. The operator used in this
key (+=) tells
udev to additionally perform this
action, even if previous or later rules add other symbolic links. As this
rule contains two match keys, it is only applied if both conditions are met.
The printer rule deals with USB printers and
contains two match keys which must both apply to get the entire rule applied
(SUBSYSTEM and KERNEL). Three assign
keys deal with the naming for this device type (NAME),
the creation of symbolic device links (SYMLINK) and the
group membership for this device type (GROUP). Using the
* wild card in the KERNEL key makes it
match several lp printer devices. Substitutions are used
in both the NAME and the SYMLINK keys
to extend these strings by the internal device name. For example, the
symbolic link to the first lp USB printer would read
/dev/usblp0.
The kernel firmware loader rule makes
udev load additional firmware by an
external helper script during runtime. The SUBSYSTEM
match key searches for the firmware subsystem. The
ACTION key checks whether any device belonging to the
firmware subsystem has been added. The
RUN+= key triggers the execution of the
firmware.sh script to locate the firmware that is to be
loaded.
Some general characteristics are common to all rules:
Each rule consists of one or more key value pairs separated by a comma.
A key's operation is determined by the operator.
udev rules support several
operators.
Each given value must be enclosed by quotation marks.
Each line of the rules file represents one rule. If a rule is longer than
one line, use \ to join the different lines as you
would do in shell syntax.
udev rules support shell-style
pattern matching with the *, ?, and
[] wild cards.
udev rules support substitutions.
udev Rules #
When creating keys, you can choose from several operators, depending on the type of key you want to create. Match keys are normally used to find a value that either matches or explicitly mismatches the search value. Match keys contain either of the following operators:
==
Compare for equality. If the key contains a search pattern, all results matching this pattern are valid.
!=
Compare for non-equality. If the key contains a search pattern, all results matching this pattern are valid.
Any of the following operators can be used with assign keys:
=
Assign a value to a key. If the key previously consisted of a list of values, the key resets and only the single value is assigned.
+=
Add a value to a key that contains a list of entries.
:=
Assign a final value. Disallow any later change by later rules.
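A hypothetical rule combining these operators (not part of the default rule set) could look like this:

```
# == matches, += appends an additional symbolic link, and := assigns a
# final value that later rules cannot change.
SUBSYSTEM=="block", KERNEL=="sd*", SYMLINK+="backup%n", GROUP:="disk"
```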
udev Rules #
udev rules support the use of
placeholders and substitutions. Use them in a similar fashion as you would
do in any other scripts. The following substitutions can be used with
udev rules:
%r, $root
The device directory, /dev by default.
%p, $devpath
The value of DEVPATH.
%k, $kernel
The value of KERNEL or the internal device name.
%n, $number
The device number.
%N, $tempnode
The temporary name of the device file.
%M, $major
The major number of the device.
%m, $minor
The minor number of the device.
%s{ATTRIBUTE},
$attr{ATTRIBUTE}
The value of a sysfs attribute (specified by
ATTRIBUTE).
%E{VARIABLE},
$env{VARIABLE}
The value of an environment variable (specified by VARIABLE).
%c, $result
The output of PROGRAM.
%%
The % character.
$$
The $ character.
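A hypothetical rule (not from the default rule set) illustrating several of these substitutions:

```
# %k expands to the kernel device name and %s{serial} to the sysfs
# "serial" attribute of the event device.
SUBSYSTEM=="usb", KERNEL=="lp*", SYMLINK+="printer-%s{serial}", ENV{MY_KERNEL_NAME}="%k"
```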
udev Match Keys #
Match keys describe conditions that must be met before a
udev rule can be applied. The
following match keys are available:
ACTION
The name of the event action, for example, add or
remove when adding or removing a device.
DEVPATH
The device path of the event device, for example,
DEVPATH=/bus/pci/drivers/ipw3945 to search for all
events related to the ipw3945 driver.
KERNEL
The internal (kernel) name of the event device.
SUBSYSTEM
The subsystem of the event device, for example,
SUBSYSTEM=usb for all events related to USB devices.
ATTR{FILENAME}
sysfs attributes of the
event device. To match a string contained in the
vendor attribute file name, you could use
ATTR{vendor}=="On[sS]tream", for example.
KERNELS
Let udev search the device path
upwards for a matching device name.
SUBSYSTEMS
Let udev search the device path
upwards for a matching device subsystem name.
DRIVERS
Let udev search the device path
upwards for a matching device driver name.
ATTRS{FILENAME}
Let udev search the device path
upwards for a device with matching
sysfs attribute values.
ENV{KEY}
The value of an environment variable, for example,
ENV{ID_BUS}=="ieee1394" to search for all events
related to the FireWire bus ID.
PROGRAM
Let udev execute an external
program. To be successful, the program must return with exit code zero.
The program's output, printed to STDOUT, is available to the
RESULT key.
RESULT
Match the output string of the last PROGRAM call.
Either include this key in the same rule as the
PROGRAM key or in a later one.
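Several match keys are typically combined in one rule. The following sketch is illustrative only; the helper program /usr/local/bin/check_device is an assumed example, not a real tool:

```
# Hypothetical rule: run an external helper for every ttyUSB device and
# match on its STDOUT via RESULT
KERNEL=="ttyUSB*", PROGRAM="/usr/local/bin/check_device %k", RESULT=="modem", SYMLINK+="modem"
```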
udev Assign Keys #
In contrast to the match keys described above, assign keys do not describe
conditions that must be met. They assign values, names and actions to the
device nodes maintained by udev.
NAME
The name of the device node to be created. After a rule has set a node
name, all other rules with a NAME key for this node
are ignored.
SYMLINK
The name of a symbolic link related to the node to be created. Multiple matching rules can add symbolic links to be created with the device node. You can also specify multiple symbolic links for one node in one rule using the space character to separate the symbolic link names.
OWNER, GROUP, MODE
The permissions for the new device node. Values specified here overwrite anything that has been compiled in.
ATTR{KEY}
Specify a value to be written to a
sysfs attribute of the event
device. If the == operator is used, this key is also
used to match against the value of a
sysfs attribute.
ENV{KEY}
Tell udev to export a variable
to the environment. If the == operator is used, this
key is also used to match against an environment variable.
RUN
Tell udev to add a program to
the list of programs to be executed for this device. Keep in mind to
restrict this to very short tasks to avoid blocking further events for
this device.
LABEL
Add a label where a GOTO can jump to.
GOTO
Tell udev to skip several
rules and continue with the one that carries the label referenced by the
GOTO key.
IMPORT{TYPE}
Load variables into the event environment such as the output of an
external program. udev imports
variables of several types. If no type is specified,
udev tries to determine the
type itself based on the executable bit of the file permissions.
program tells
udev to execute an external
program and import its output.
file tells
udev to import a text file.
parent tells
udev to import the stored
keys from the parent device.
WAIT_FOR_SYSFS
Tells udev to wait for the
specified sysfs file to be
created for a certain device. For example,
WAIT_FOR_SYSFS="ioerr_cnt" informs
udev to wait until the
ioerr_cnt file has been created.
OPTIONS
The OPTIONS key may have several values:
last_rule tells
udev to ignore all later
rules.
ignore_device tells
udev to ignore this event
completely.
ignore_remove tells
udev to ignore all later
remove events for the device.
all_partitions tells
udev to create device nodes
for all available partitions on a block device.
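The flow-control keys can be sketched as follows; the label name and link are assumptions for illustration:

```
# Hypothetical sketch of LABEL/GOTO flow control: skip the naming rule
# for anything that is not a block device
SUBSYSTEM!="block", GOTO="my_rules_end"
KERNEL=="sd*", SYMLINK+="bulk_storage_%n"
LABEL="my_rules_end"
```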
The dynamic device directory and the
udev rules infrastructure make it
possible to provide stable names for all disk devices—regardless of
their order of recognition or the connection used for the device. Every
appropriate block device the kernel creates is examined by tools with
special knowledge about certain buses, drive types or file systems. Along
with the dynamic kernel-provided device node name,
udev maintains classes of
persistent symbolic links pointing to the device:
/dev/disk
|-- by-id
| |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B -> ../../sda
| |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part1 -> ../../sda1
| |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part6 -> ../../sda6
| |-- scsi-SATA_HTS726060M9AT00_MRH453M4HWHG7B-part7 -> ../../sda7
| |-- usb-Generic_STORAGE_DEVICE_02773 -> ../../sdd
| `-- usb-Generic_STORAGE_DEVICE_02773-part1 -> ../../sdd1
|-- by-label
| |-- Photos -> ../../sdd1
| |-- SUSE10 -> ../../sda7
| `-- devel -> ../../sda6
|-- by-path
| |-- pci-0000:00:1f.2-scsi-0:0:0:0 -> ../../sda
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part6 -> ../../sda6
| |-- pci-0000:00:1f.2-scsi-0:0:0:0-part7 -> ../../sda7
| |-- pci-0000:00:1f.2-scsi-1:0:0:0 -> ../../sr0
| |-- usb-02773:0:0:2 -> ../../sdd
| |-- usb-02773:0:0:2-part1 -> ../../sdd1
`-- by-uuid
|-- 159a47a4-e6e6-40be-a757-a629991479ae -> ../../sda7
|-- 3e999973-00c9-4917-9442-b7633bd95b9e -> ../../sda6
`-- 4210-8F8C -> ../../sdd1
udev #
/sys/*
Virtual file system provided by the Linux kernel, exporting all currently
known devices. This information is used by
udev to create device nodes in
/dev.
/dev/*
Dynamically created device nodes and static content created with
systemd-tmpfiles; for more information, see the
systemd-tmpfiles(8) man page.
The following files and directories contain the crucial elements of the
udev infrastructure:
/etc/udev/udev.conf
Main udev configuration file.
/etc/udev/rules.d/*
udev event matching rules.
/usr/lib/tmpfiles.d/ and
/etc/tmpfiles.d/
Responsible for static /dev content.
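As a hedged sketch of the tmpfiles.d format used for static /dev content (the entry below is illustrative; see the tmpfiles.d(5) man page for the exact field meanings):

```
# Type  Path       Mode  UID   GID   Age  Argument (major:minor)
c!      /dev/fuse  0666  root  root  -    10:229
```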
/usr/lib/udev/*
Helper programs called from udev
rules.
For more information about the udev
infrastructure, refer to the following man pages:
udev
General information about udev,
keys, rules and other important configuration issues.
udevadm
udevadm can be used to control the runtime behavior of
udev, request kernel events,
manage the event queue and provide simple debugging mechanisms.
udevd
Information about the udev event
managing daemon.
Configuring a network client requires detailed knowledge about services provided over the network (such as printing or LDAP, for example). To make it easier to configure such services on a network client, the “service location protocol” (SLP) was developed. SLP makes the availability and configuration data of selected services known to all clients in the local network. Applications that support SLP can use this information to be configured automatically.
The NTP (network time protocol) mechanism is a protocol for synchronizing the system time over the network. First, a machine can obtain the time from a server that is a reliable time source. Second, a machine can itself act as a time source for other computers in the network. The goal is twofold—maintaining the absolute time and synchronizing the system time of all machines within a network.
DNS (domain name system) is needed to resolve the domain names and host
names into IP addresses. In this way, the IP address 192.168.2.100 is assigned to
the host name jupiter, for example. Before setting up your
own name server, read the general information about DNS in
Section 13.3, “Name Resolution”. The following configuration
examples refer to BIND, the default DNS server.
The purpose of the Dynamic Host Configuration Protocol (DHCP) is to assign network settings centrally (from a server) rather than configuring them locally on every workstation. A host configured to use DHCP does not have control over its own static address. It is enabled to configure itself completely and automatically according to directions from the server. If you use the NetworkManager on the client side, you do not need to configure the client. This is useful if you have changing environments and only one interface active at a time. Never use NetworkManager on a machine that runs a DHCP server.
Using Samba, a Unix machine can be configured as a file and print server for macOS, Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather complex product. Configure Samba with YaST, or by editing the configuration file manually.
The Network File System (NFS) is a protocol that allows access to files on a server very similar to accessing local files.
autofs is a program that automatically mounts
specified directories on an on-demand basis. It is based on a kernel module
for high efficiency, and can manage both local directories and network
shares. These automatic mount points are mounted only when they are
accessed, and unmounted after a certain period of inactivity. This
on-demand behavior saves bandwidth and results in better performance than
static mounts managed by /etc/fstab. While
autofs is a control script,
automount is the command (daemon) that does the actual
auto-mounting.
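As an illustrative sketch of the on-demand behavior described above (the mount point, map file, and server name are assumptions):

```
# /etc/auto.master: directories under /nfs are managed by /etc/auto.nfs
# and unmounted after 60 seconds of inactivity
/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs: accessing /nfs/data mounts the share on demand
data  -fstype=nfs,ro  nfs.example.com:/export/data
```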
According to the survey from http://www.netcraft.com/, the Apache HTTP Server (Apache) is the world's most widely-used Web server. Developed by the Apache Software Foundation (http://www.apache.org/), it is available for most operating systems. openSUSE® Leap includes Apache version 2.4. In this chapter, learn how to install, configure and set up a Web server; how to use SSL, CGI, and additional modules; and how to troubleshoot Apache.
Using the YaST module, you can configure your machine to function as an FTP (File Transfer Protocol) server. Anonymous and/or authenticated users can connect to your machine and download files using the FTP protocol. Depending on the configuration, they can also upload files to the FTP server. YaST uses vsftpd (Very Secure FTP Daemon).
Squid is a widely-used proxy cache for Linux and Unix platforms. This means that it stores requested Internet objects, such as data on a Web or FTP server, on a machine that is closer to the requesting workstation than the server. It can be set up in multiple hierarchies to assure optimal response times and low bandwidth usage, even in modes that are transparent to end users.
openSUSE® Leap supports installation using installation sources provided with SLP and contains many system services with integrated support for SLP. You can use SLP to provide networked clients with central functions, such as an installation server, file server, or print server on your system. Services that offer SLP support include cupsd, login, ntp, openldap2, postfix, rpasswd, rsyncd, saned, sshd (via fish), vnc, and ypserv.
All packages necessary to use SLP services on a network client are installed
by default. However, if you want to provide services via
SLP, check that the openslp-server package is
installed.
slptool #
slptool is a command line tool to query and register SLP
services. The query functions are useful for diagnostic purposes. The most
important slptool subcommands are listed below.
slptool --help lists all available
options and functions.
List all service types available on the network.
tux > slptool findsrvtypes
service:install.suse:nfs
service:install.suse:ftp
service:install.suse:http
service:install.suse:smb
service:ssh
service:fish
service:YaST.installation.suse:vnc
service:smtp
service:domain
service:management-software.IBM:hardware-management-console
service:rsync
service:ntp
service:ypserv
List all servers providing SERVICE_TYPE
tux > slptool findsrvs service:ntp
service:ntp://ntp.example.com:123,57810
service:ntp://ntp2.example.com:123,57810
List attributes for SERVICE_TYPE on HOST
tux > slptool findattrs service:ntp://ntp.example.com
(owner=tux),(email=tux@example.com)
Register SERVICE_TYPE on HOST with an optional list of attributes
tux > slptool register service:ntp://ntp.example.com:57810 \
"(owner=tux),(email=tux@example.com)"
De-register SERVICE_TYPE on HOST
tux > slptool deregister service:ntp://ntp.example.com
For more information run slptool --help.
To provide SLP services, the SLP daemon
(slpd) must be running. Like most
system services in openSUSE Leap,
slpd is controlled by means of a
separate start script. After the installation, the daemon is inactive by
default. To activate it for the current session, run sudo systemctl
start slpd. If slpd should
be activated on system start-up, run sudo systemctl enable
slpd.
Many applications in openSUSE Leap have integrated SLP support via the
libslp library. If a service has not been compiled with
SLP support, use one of the following methods to make it available via SLP:
/etc/slp.reg.d
Create a separate registration file for each new service. The following example registers a scanner service:
## Register a saned service on this system
## en means english language
## 65535 disables the timeout, so the service registration does
## not need refreshes
service:scanner.sane://$HOSTNAME:6566,en,65535
watch-port-tcp=6566
description=SANE scanner daemon
The most important line in this file is the service
URL, which begins with service:. This
contains the service type (scanner.sane) and the
address under which the service is available on the server.
$HOSTNAME is automatically replaced with the
full host name. The number of the TCP port on which the relevant service
can be found follows, separated by a colon. Then enter the language in
which the service should appear and the duration of registration in
seconds. These should be separated from the service URL by commas. Set
the value for the duration of registration between 0
and 65535. 0 prevents registration.
65535 removes all restrictions.
The registration file also contains the two variables
watch-port-tcp and
description.
watch-port-tcp links the SLP service
announcement to whether the relevant service is active by having
slpd check the status of the
service. The second variable contains a more precise description of the
service that is displayed in suitable browsers.
Some services brokered by YaST, such as an installation server or YOU server, perform this registration automatically when you activate SLP in the module dialogs. YaST then creates registration files for these services.
/etc/slp.reg
The only difference between this method and the procedure with
/etc/slp.reg.d is that all services are grouped
within a central file.
slptool
If a service needs to be registered dynamically without the need of
configuration files, use the slptool command line utility. The same
utility can also be used to de-register an existing service offering
without restarting slpd. See
Section 17.1, “The SLP Front-End slptool” for details.
Announcing the installation data via SLP within your network makes the network installation much easier, since the installation data, such as the IP address of the server or the path to the installation media, is provided automatically via SLP query.
RFC 2608 generally deals with the definition of SLP. RFC 2609 deals with the syntax of the service URLs used in greater detail and RFC 2610 deals with DHCP via SLP.
The home page of the OpenSLP project.
/usr/share/doc/packages/openslp
This directory contains the documentation for SLP coming with the
openslp-server package, including a
README.SUSE containing the openSUSE Leap details,
the RFCs, and two introductory HTML documents. Programmers who want to
use the SLP functions will find more information in the
Programmers Guide that is included in the
openslp-devel package.
Maintaining an exact system time is important in many situations. The built-in hardware clock often does not meet the requirements of applications such as databases or clusters. Manually correcting the system time would lead to severe problems because, for example, a backward leap can cause malfunctions of critical applications. Within a network, it is usually necessary to synchronize the system time of all machines, but manual time adjustment is a bad approach. NTP provides a mechanism to solve these problems. The NTP service continuously adjusts the system time with reliable time servers in the network. It further enables the management of local reference clocks, such as radio-controlled clocks.
Since openSUSE Leap 15, chrony is the default implementation of NTP.
chrony consists of two parts: chronyd is a daemon that can be started at
boot time, and chronyc is a command line interface program to monitor the
performance of chronyd and to change various operating parameters at
runtime.
To enable time synchronization by means of active directory, follow the instructions found at Procedure 7.2, “ Joining an Active Directory Domain Using ”.
The NTP daemon (chronyd) coming with the chrony
package is preset to use the local computer hardware clock as a time
reference. The precision of a hardware clock heavily depends on its time
source. For example, an atomic clock or GPS receiver is a very precise time
source, while a common RTC chip is not a reliable time source. YaST
simplifies the configuration of an NTP client.
In the YaST NTP client configuration ( › ) window, you can specify when to start the NTP daemon, the type of the configuration source, and add custom time servers.
You can choose from three options when to start the NTP daemon:
Select , if you want to manually start
the chrony daemon.
Select to set the system
time periodically without a permanently running chrony. You can set
the .
Select to start chronyd
automatically when the system is booted. This setting is recommended.
In the drop-down list, select either or . Set if your server uses only a fixed set of (public) NTP servers, while is better if your internal network offers NTP servers via DHCP.
Time servers for the client to query are listed in the lower part of the window. Modify this list as needed with , , and .
Click to add a new time server:
In the field, type the URL of the time server or pool of time servers with which you want to synchronize the machine time. After the URL is complete, click to verify that it points to a valid time source.
Activate to speed up the time
synchronization by sending more requests at chronyd start-up.
Activate to speed up the boot time on
systems that start the chronyd daemon automatically and may not have an
internet connection at boot time. This option is useful for example for
laptops whose network connection is managed by NetworkManager.
Confirm with
chrony reads its configuration from the
/etc/chrony.conf file. To keep the computer clock
synchronized, you need to tell chrony what time servers to use. You can
use specific server names or IP addresses, for example:
server 0.europe.pool.ntp.org
server 1.europe.pool.ntp.org
server 2.europe.pool.ntp.org
You can also specify a pool name. A pool name resolves to several IP addresses:
pool pool.ntp.org
To synchronize time on multiple computers on the same network, we do not
recommend synchronizing all of them with an external server. A good
practice is to make one computer the time server, which is synchronized with
an external time server, and have the other computers act as its clients. Add a
local directive to the server's
/etc/chrony.conf to distinguish it from an
authoritative time server:
local stratum 10
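Putting these directives together, a minimal sketch of the local time server's /etc/chrony.conf might look like this (the allow subnet and driftfile path are assumptions for illustration):

```
# upstream source for this machine
pool pool.ntp.org iburst
# keep serving time to the local network even if upstream is unreachable
local stratum 10
# allow client requests from the local subnet (assumed 192.168.1.0/24)
allow 192.168.1.0/24
# record the systematic error of the hardware clock
driftfile /var/lib/chrony/drift
```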
To start chrony, run:
systemctl start chronyd.service
After initializing chronyd, it takes some time before the time is
stabilized and the drift file for correcting the local computer clock is
created. With the drift file, the systematic error of the hardware clock can
be computed when the computer is powered on. The correction is used
immediately, resulting in a higher stability of the system time.
To enable the service so that chrony starts automatically at boot time,
run:
systemctl enable chronyd.service
chronyd at Runtime Using chronyc #
You can use chronyc to change the behavior of chronyd at runtime. It
also generates status reports about the operation of chronyd.
You can run chronyc either in interactive or non-interactive mode. To
run chronyc interactively, enter chronyc on the command line. It
displays a prompt and waits for your command input. For example, to check
how many NTP sources are online or offline, run:
root # chronyc
chronyc> activity
200 OK
4 sources online
2 sources offline
1 sources doing burst (return to online)
1 sources doing burst (return to offline)
0 sources with unknown address
To exit chronyc's prompt, enter quit or
exit.
If you do not need to use the interactive prompt, enter the command directly:
root # chronyc activity
Changes made using chronyc are not permanent. They will be lost after the
next chronyd restart. For permanent changes, modify
/etc/chrony.conf.
For a complete list of chronyc commands, see its manual page (man
1 chronyc).
If the system boots without network connection, chronyd starts up, but it
cannot resolve DNS names of the time servers set in the configuration file.
This can happen if you use NetworkManager with an encrypted Wi-Fi.
chronyd keeps trying to resolve the time server names specified by the
server, pool, and peer
directives at increasing time intervals until it succeeds.
If the time server is not reachable when chronyd is started, you can
specify the offline option:
server server_address offline
chronyd will then not try to poll the server until it is enabled using the
following command:
root # chronyc online server_address
When the auto_offline option is set, chronyd assumes that
the time server has gone offline when two requests have been sent to it
without receiving a response. This option avoids the need to run the
offline command from chronyc when disconnecting the network link.
The software package chrony relies on other programs (such as
gpsd) to access the timing data via the SHM or SOCK
driver. Use the refclock directive in
/etc/chrony.conf to specify a hardware reference clock
to be used as a time source. It has two mandatory parameters: a driver name
and a driver-specific parameter. The two parameters are followed by zero or
more refclock options. chronyd includes the following
drivers:
PPS - driver for the kernel 'pulse per second' API. For example:
refclock PPS /dev/pps0 lock NMEA refid GPS
SHM - NTP shared memory driver. For example:
refclock SHM 0 poll 3 refid GPS1
refclock SHM 1:perm=0644 refid GPS2
SOCK - Unix domain socket driver. For example:
refclock SOCK /var/run/chrony.ttyS0.sock
PHC - PTP hardware clock driver. For example:
refclock PHC /dev/ptp0 poll 0 dpoll -2 offset -37
refclock PHC /dev/ptp1:nocrossts poll 3 pps
For more information on individual drivers' options, see man 8
chrony.conf.
The domain name space is divided into regions called zones. For example,
if you have example.com, you have the
example section (or zone) of the
com domain.
The DNS server is a server that maintains the name and IP information for a domain. You can have a primary DNS server for a master zone, a secondary server for a slave zone, or a slave server without any zones, for caching.
The master zone includes all hosts from your network, and the master zone of a DNS server stores up-to-date records for all the hosts in your domain.
A slave zone is a copy of the master zone. The slave zone DNS server obtains its zone data with zone transfer operations from its master server. The slave zone DNS server responds authoritatively for the zone as long as it has valid (not expired) zone data. If the slave cannot obtain a new copy of the zone data, it stops responding for the zone.
Forwarders are DNS servers to which your DNS server should send queries
it cannot answer. To enable different configuration sources in one
configuration, netconfig is used (see also
man 8 netconfig).
A record is information about a name and an IP address. Supported records and their syntax are described in the BIND documentation. Some special records are:
An NS record tells name servers which machines are in charge of a given domain zone.
The MX (mail exchange) records describe the machines to contact for directing mail across the Internet.
The SOA (start of authority) record is the first record in a zone file. The SOA record is used when using DNS to synchronize data between multiple computers.
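The record types above could appear in a zone file roughly as follows. This is a hedged sketch only; the names, addresses, serial, and timer values are illustrative assumptions:

```
$TTL 2D
example.com.  IN SOA  dns.example.com. hostmaster.example.com. (
              2021070501  ; serial
              1D          ; refresh
              2H          ; retry
              1W          ; expiry
              2D )        ; minimum TTL

              IN NS   dns.example.com.
              IN MX   10 mail.example.com.
dns           IN A    192.168.1.116
mail          IN A    192.168.1.117
```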
To install a DNS server, start YaST and select › . Choose › and select . Confirm the installation of the dependent packages to finish the installation process.
Alternatively use the following command on the command line:
tux > sudo zypper in -t pattern dhcp_dns_server
Use the YaST DNS module to configure a DNS server for the local network. When starting the module for the first time, a wizard starts, prompting you to make a few decisions concerning administration of the server. Completing this initial setup produces a basic server configuration. Use the expert mode to deal with more advanced configuration tasks, such as setting up ACLs, logging, TSIG keys, and other options.
The wizard consists of three steps or dialogs. At the appropriate places in the dialogs, you can enter the expert configuration mode.
When starting the module for the first time, the dialog, shown in Figure 19.1, “DNS Server Installation: Forwarder Settings”, opens. It allows you to set the following options:
—If is selected,
can be specified; by default (with
selected), is set to
auto, but here you can either set interface names or
select from the two special policy names STATIC and
STATIC_FALLBACK.
In , specify which service to use: , , or .
For more information about all these settings, see man 8
netconfig.
Forwarders are DNS servers to which your DNS server sends queries it cannot answer itself. Enter their IP address and click .
The dialog consists of several parts and is
responsible for the management of zone files, described in
Section 19.6, “Zone Files”. For a new zone, provide a name for it
in . To add a reverse zone, the name must end in
.in-addr.arpa. Finally, select the
(master, slave, or forward). See
Figure 19.2, “DNS Server Installation: DNS Zones”. Click
to configure other settings of an existing zone. To remove a zone, click
.
In the final dialog, you can open the DNS port in the firewall by clicking . Then decide whether to start the DNS server when booting ( or ). You can also activate LDAP support. See Figure 19.3, “DNS Server Installation: Finish Wizard”.
After starting the module, YaST opens a window displaying several configuration options. Completing it results in a DNS server configuration with the basic functions in place:
Under , define whether the DNS server should be started when booting the system or manually. To start the DNS server immediately, click . To stop the DNS server, click . To save the current settings, select . You can open the DNS port in the firewall with and modify the firewall settings with .
By selecting , the zone files are managed by an LDAP database. Any changes to zone data written to the LDAP database are picked up by the DNS server when it is restarted or prompted to reload its configuration.
If your local DNS server cannot answer a request, it tries to forward the
request to a , if configured so. This
forwarder may be added manually to the .
If the forwarder is not static like in dial-up connections,
handles the configuration. For more
information about netconfig, see man 8 netconfig.
In this section, set basic server options. From the menu, select the desired item then specify the value in the corresponding text box. Include the new entry by selecting .
To set what the DNS server should log and how, select . Under , specify where the DNS server should write the log data. Use the system-wide log by selecting or specify a different file by selecting . In the latter case, additionally specify a name, the maximum file size in megabytes and the number of log file versions to store.
Further options are available under . Enabling causes every query to be logged, in which case the log file could grow extremely large. For this reason, it is not a good idea to enable this option for other than debugging purposes. To log the data traffic during zone updates between DHCP and DNS server, enable . To log the data traffic during a zone transfer from master to slave, enable . See Figure 19.4, “DNS Server: Logging”.
Use this dialog to define ACLs (access control lists) to enforce access restrictions. After providing a distinct name under , specify an IP address (with or without netmask) under in the following fashion:
{ 192.168.1/24; }
The syntax of the configuration file requires that the address ends with a semicolon and is put into curly braces.
The main purpose of TSIGs (transaction signatures) is to secure communications between DHCP and DNS servers. They are described in Section 19.8, “Secure Transactions”.
To generate a TSIG key, enter a distinctive name in the field labeled and specify the file where the key should be stored (). Confirm your choices with .
To use a previously created key, leave the field blank and select the file where it is stored under . After that, confirm with .
To add a slave zone, select , choose the zone type , write the name of the new zone, and click .
In the sub-dialog under , specify the master from which the slave should pull its data. To limit access to the server, select one of the ACLs from the list.
To add a master zone, select , choose the zone
type , write the name of the new zone, and click
. When adding a master zone, a reverse zone is also
needed. For example, when adding the zone
example.com that points to hosts in a subnet
192.168.1.0/24, you should also add a reverse zone for
the IP-address range covered. By definition, this should be named
1.168.192.in-addr.arpa.
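The reverse zone name is derived by reversing the network part of the address and appending .in-addr.arpa. For a /24 network, the derivation can be sketched in shell; the function name reverse_zone is hypothetical, used only to illustrate the naming rule:

```shell
# reverse_zone: derive the in-addr.arpa zone name for a /24 network address
reverse_zone() {
  # split the dotted quad into octets and reverse the first three
  IFS=. read -r a b c _ <<EOF
$1
EOF
  echo "$c.$b.$a.in-addr.arpa"
}

reverse_zone 192.168.1.0   # prints 1.168.192.in-addr.arpa
```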
To edit a master zone, select , select the master zone from the table, and click . The dialog consists of several pages: (the one opened first), , , , and .
The basic dialog, shown in Figure 19.5, “DNS Server: Zone Editor (Basics)”, lets you define settings for dynamic DNS and access options for zone transfers to clients and slave name servers. To permit the dynamic updating of zones, select as well as the corresponding TSIG key. The key must have been defined before the update action starts. To enable zone transfers, select the corresponding ACLs. ACLs must have been defined already.
In the dialog, select whether to enable zone transfers. Use the listed ACLs to define who can download zones.
The dialog allows you to define alternative name servers for the zones specified. Make sure that your own name server is included in the list. To add a record, enter its name under then confirm with . See Figure 19.6, “DNS Server: Zone Editor (NS Records)”.
To add a mail server for the current zone to the existing list, enter the corresponding address and priority value. After doing so, confirm by selecting . See Figure 19.7, “DNS Server: Zone Editor (MX Records)”.
This page allows you to create SOA (start of authority) records. For an explanation of the individual options, refer to Example 19.6, “The /var/lib/named/example.com.zone File”. Changing SOA records is not supported for dynamic zones managed via LDAP.
This dialog manages name resolution. In ,
enter the host name then select its type. The type
represents the main entry. The value for this should be an IP address
(IPv4). Use for IPv6 addresses.
is an alias. Use the types
and for detailed or partial
records that expand on the information provided in the and tabs. These three
types resolve to an existing A record.
is for reverse zones. It is the opposite of an
A record, for example:
hostname.example.com. IN A   192.168.0.1
1.0.168.192.in-addr.arpa IN PTR hostname.example.com.
To add a reverse zone, follow this procedure:
Start › › .
If you have not added a master forward zone, add it and edit it.
In the tab, fill the corresponding and , then add the record with and confirm with . If YaST complains about a non-existing record for a name server, add it in the tab.
Back in the window, add a reverse master zone.
the reverse zone, and in the tab, you can see the record type. Add the corresponding and , then click and confirm with .
Add a name server record if needed.
After adding a forward zone, go back to the main menu and select the reverse zone for editing. There in the tab activate the check box and select your forward zone. That way, all changes to the forward zone are automatically updated in the reverse zone.
On an openSUSE® Leap system, the name server BIND (Berkeley
Internet Name Domain) comes preconfigured, so it can be started
right after installation without any problems. Normally, if you already have an Internet connection and entered
127.0.0.1 as the name server
address for localhost in
/etc/resolv.conf, you already have a working
name resolution without needing to know the provider's DNS. BIND
carries out name resolution via the root name servers, a notably slower
process. Normally, the provider's DNS should be entered with its IP
address in the configuration file /etc/named.conf under
forwarders to ensure effective and secure name
resolution. If this works so far, the name server runs as a pure
caching-only name server. Only when you configure its own zones does it
become a proper DNS server. Find a simple example documented in
/usr/share/doc/packages/bind/config.
Depending on the type of Internet connection or the network connection, the
name server information can automatically be adapted to the current
conditions. To do this, set the
NETCONFIG_DNS_POLICY variable in the
/etc/sysconfig/network/config file to
auto.
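The relevant setting boils down to a one-line excerpt (a sketch; after changing it, netconfig re-applies the settings):

```
# /etc/sysconfig/network/config (excerpt)
NETCONFIG_DNS_POLICY="auto"
```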
However, do not set up an official domain until one is assigned to you by the responsible institution. Even if you have your own domain and it is managed by the provider, you are better off not using it, because BIND would otherwise not forward requests for this domain. The Web server at the provider, for example, would not be accessible for this domain.
To start the name server, enter the command systemctl start
named as root. Check
with systemctl status named whether named (as the name
server process is called) has been started successfully. Test the name
server immediately on the local system with the host or
dig programs, which should return
localhost as the default server
with the address 127.0.0.1. If
this is not the case, /etc/resolv.conf probably
contains an incorrect name server entry or the file does not exist. For the
first test, enter host 127.0.0.1,
which should always work. If you get an error message, use
systemctl status named to see whether the server is
actually running. If the name server does not start or behaves unexpectedly,
check the output of journalctl -e.
To use the name server of the provider (or one already running on your
network) as the forwarder, enter the corresponding IP address or addresses
in the options section under
forwarders. The addresses included in
Example 19.1, “Forwarding Options in named.conf” are examples only. Adjust these entries to your
own setup.
options {
directory "/var/lib/named";
forwarders { 10.11.12.13; 10.11.12.14; };
listen-on { 127.0.0.1; 192.168.1.116; };
allow-query { 127/8; 192.168/16; };
notify no;
};
The options entry is followed by entries for the
zone, localhost, and
0.0.127.in-addr.arpa. The type
hint entry under “.” should always be present. The
corresponding files do not need to be modified and should work as they are.
Also make sure that each entry is closed with a “;” and that
the curly braces are in the correct places. After changing the configuration
file /etc/named.conf or the zone files, tell BIND to
reread them with systemctl reload named. Achieve the same
by stopping and restarting the name server with systemctl restart
named. Stop the server at any time by entering systemctl
stop named.
All the settings for the BIND name server itself are stored in the
/etc/named.conf file. However, the zone data for the
domains to handle (consisting of the host names, IP addresses, and so on)
are stored in separate files in the /var/lib/named
directory. The details of this are described later.
/etc/named.conf is roughly divided into two areas. One
is the options section for general settings and the
other consists of zone entries for the individual
domains. A logging section and
acl (access control list) entries are optional.
Comment lines begin with a # sign or
//. A minimal /etc/named.conf is
shown in Example 19.2, “A Basic /etc/named.conf”.
options {
directory "/var/lib/named";
forwarders { 10.0.0.1; };
notify no;
};
zone "localhost" in {
type master;
file "localhost.zone";
};
zone "0.0.127.in-addr.arpa" in {
type master;
file "127.0.0.zone";
};
zone "." in {
type hint;
file "root.hint";
};
Specifies the directory in which BIND can find the files containing the
zone data. Usually, this is /var/lib/named.
Specifies the name servers (mostly of the provider) to which DNS
requests should be forwarded if they cannot be resolved directly.
Replace IP-ADDRESS with an IP address like
192.168.1.116.
Causes DNS requests to be forwarded before an attempt is made to resolve
them via the root name servers. Instead of forward
first, forward only can be written
to have all requests forwarded and none sent to the root name servers.
This makes sense for firewall configurations.
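Behind a restrictive firewall, such a forward-only setup could be sketched as follows (the forwarder address is illustrative):

```
options {
  directory "/var/lib/named";
  forwarders { 10.0.0.1; };
  forward only;
};
```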
Tells BIND on which network interfaces and port to accept client
queries. port 53 does not need to be specified
explicitly, because 53 is the default port. Enter
127.0.0.1 to permit requests from the local host. If
you omit this entry entirely, all interfaces are used by default.
Tells BIND on which port it should listen for IPv6 client requests. The
only alternative to any is none.
As far as IPv6 is concerned, the server only accepts wild card
addresses.
This entry is necessary if a firewall is blocking outgoing DNS requests. It tells BIND to send requests externally from port 53 and not from any of the high ports above 1024.
Tells BIND which port to use for IPv6 queries.
Defines the networks from which clients can send DNS requests. Replace
NET with address information like
192.168.2.0/24. The /24 at
the end is an abbreviated expression for the netmask (in this case
255.255.255.0).
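The relation between the /24 notation and the dotted netmask can be illustrated with a small shell calculation (a sketch using plain shell arithmetic; no DNS tools are involved):

```shell
# Expand a prefix length (here /24) into the dotted netmask it abbreviates
prefix=24
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
printf '%d.%d.%d.%d\n' \
  $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
  $(( (mask >> 8) & 255 ))  $((  mask        & 255 ))
# prints 255.255.255.0
```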
Controls which hosts can request zone transfers. In the example, such
requests are completely denied with ! *.
Without this entry, zone transfers can be requested from anywhere
without restrictions.
In the absence of this entry, BIND generates several lines of statistical information per hour in the system's journal. Set it to 0 to suppress these statistics completely or set an interval in minutes.
This option defines at which time intervals BIND clears its cache. This triggers an entry in the system's journal each time it occurs. The time specification is in minutes. The default is 60 minutes.
BIND regularly searches the network interfaces for new or nonexistent
interfaces. If this value is set to 0, this is
not done and BIND only listens at the interfaces detected at start-up.
Otherwise, the interval can be defined in minutes. The default is 60
minutes.
no prevents other name servers from being informed when
changes are made to the zone data or when the name server is restarted.
For a list of available options, read the manual page man 5
named.conf.
What, how, and where logging takes place can be extensively configured in BIND. Normally, the default settings should be sufficient. Example 19.3, “Entry to Disable Logging”, shows the simplest form of such an entry and completely suppresses any logging.
logging {
category default { null; };
};

zone "example.com" in {
type master;
file "example.com.zone";
notify no;
};
After zone, specify the name of the domain to
administer (example.com)
followed by in and a block of relevant options
enclosed in curly braces, as shown in Example 19.4, “Zone Entry for example.com”. To
define a slave zone, switch the
type to slave and specify a
name server that administers this zone as master (which,
in turn, may be a slave of another master), as shown in
Example 19.5, “Zone Entry for example.net”.
zone "example.net" in {
type slave;
file "slave/example.net.zone";
masters { 10.0.0.1; };
};

The zone options:
By specifying master, tell BIND that the zone is
handled by the local name server. This assumes that a zone file has been
created in the correct format.
This zone is transferred from another name server. It must be used
together with masters.
The zone . of the hint type is
used to set the root name servers. This zone definition can be left as
is.
file "example.com.zone"; or file
"slave/example.net.zone";
This entry specifies the file where zone data for the domain is located.
This file is not required for a slave, because this data is pulled from
another name server. To differentiate master and slave files, use the
directory slave for the slave files.
This entry is only needed for slave zones. It specifies from which name server the zone file should be transferred.
This option controls external write access, which would allow clients to
make a DNS entry—something not normally desirable for security
reasons. Without this entry, zone updates are not allowed. The above
entry achieves the same because ! * effectively bans
any such activity.
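A sketch of a master zone entry that explicitly bans dynamic updates with the entry described above:

```
zone "example.com" in {
  type master;
  file "example.com.zone";
  allow-update { ! *; };
};
```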
Two types of zone files are needed. One assigns IP addresses to host names and the other does the reverse: it supplies a host name for an IP address.
The "." has an important meaning in the zone files. If
host names are given without a final dot (.), the zone
is appended. Complete host names specified with a full domain name must end
with a dot (.) to avoid having the domain added to it
again. A missing or wrongly placed "." is probably the most frequent cause
of name server configuration errors.
The first case to consider is the zone file
example.com.zone, responsible for the domain
example.com, shown in
Example 19.6, “The /var/lib/named/example.com.zone File”.
$TTL 2D
example.com. IN SOA      dns  root.example.com. (
             2003072441  ; serial
             1D          ; refresh
             2H          ; retry
             1W          ; expiry
             2D )        ; minimum

             IN NS       dns
             IN MX       10 mail

gate         IN A        192.168.5.1
             IN A        10.0.0.1
dns          IN A        192.168.1.116
mail         IN A        192.168.3.108
jupiter      IN A        192.168.2.100
venus        IN A        192.168.2.101
saturn       IN A        192.168.2.102
mercury      IN A        192.168.2.103
ntp          IN CNAME    dns
dns6         IN A6 0     2002:c0a8:174::
$TTL defines the default time to live that
should apply to all the entries in this file. In this example, entries
are valid for a period of two days (2D).
This is where the SOA (start of authority) control record begins:
The name of the domain to administer is
example.com in the first position. This ends
with ".", because otherwise the zone would be
appended a second time. Alternatively, @ can be
entered here, in which case the zone would be extracted from the
corresponding entry in /etc/named.conf.
After IN SOA is the name of the name server in
charge as master for this zone. The name is expanded from
dns to dns.example.com, because it does
not end with a ".".
An e-mail address of the person in charge of this name server follows.
Because the @ sign already has a special meaning,
"." is entered here instead. For
root@example.com the entry must read
root.example.com.. The
"." must be included at the end to prevent the zone
from being added.
The ( includes all lines up to )
into the SOA record.
The serial number is an arbitrary number that is
increased each time this file is changed. It is needed to inform the
secondary name servers (slave servers) of changes. For this, a 10 digit
number of the date and run number, written as YYYYMMDDNN, has become the
customary format.
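The customary YYYYMMDDNN serial can be generated in the shell (a sketch; NN is hard-coded here as 01 for the first change of the day):

```shell
# Compose a 10-digit serial: today's date plus a two-digit run number
serial="$(date +%Y%m%d)01"
echo "$serial"
```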
The refresh rate specifies the time interval at
which the secondary name servers verify the zone serial
number. In this case, one day.
The retry rate specifies the time interval at
which a secondary name server, in case of error, attempts to contact the
primary server again. Here, two hours.
The expiration time specifies the time frame
after which a secondary name server discards the cached data if it has
not regained contact to the primary server. Here, a week.
The last entry in the SOA record specifies the negative
caching TTL—the time for which results of unresolved
DNS queries from other servers may be cached.
The IN NS specifies the name server responsible
for this domain. dns is extended to
dns.example.com because it does not end with a
".". There can be several lines like this—one
for the primary and one for each secondary name server. If
notify is not set to no in
/etc/named.conf, all the name servers listed here
are informed of the changes made to the zone data.
The MX record specifies the mail server that accepts, processes, and
forwards e-mails for the domain
example.com. In this
example, this is the host
mail.example.com. The number in
front of the host name is the preference value. If there are multiple MX
entries, the mail server with the smallest value is taken first. If
mail delivery to this server fails, the next entry with
higher value is used.
These are the actual address records where one or more IP addresses are
assigned to host names. The names are listed here without a
"." because they do not include their domain, so
example.com is added to
all of them. Two IP addresses are assigned to the host
gate, as it has two network cards.
Wherever the host address is a traditional one (IPv4), the record is
marked with A. If the address is an IPv6 address, the
entry is marked with AAAA.
The IPv6 record has a slightly different syntax than IPv4. Because addresses can be fragmented, it is necessary to provide information about the missing bits before the address. To pad the IPv6 address with the needed number of “0” bits, add two colons at the correct place in the address.
pluto     AAAA 2345:00C1:CA11::1234:5678:9ABC:DEF0
pluto     AAAA 2345:00D2:DA11::1234:5678:9ABC:DEF0
The alias ntp can be used to address
dns (CNAME means
canonical name).
The pseudo domain in-addr.arpa is used for the reverse
lookup of IP addresses into host names. It is appended to the network part
of the address in reverse notation. So
192.168 is resolved into
168.192.in-addr.arpa. See
Example 19.7, “Reverse Lookup”.
$TTL 2D
168.192.in-addr.arpa.   IN SOA dns.example.com. root.example.com. (
                        2003072441  ; serial
                        1D          ; refresh
                        2H          ; retry
                        1W          ; expiry
                        2D )        ; minimum

                        IN NS  dns.example.com.

1.5                     IN PTR gate.example.com.
100.3                   IN PTR www.example.com.
253.2                   IN PTR cups.example.com.
$TTL defines the standard TTL that applies to all entries here.
The configuration file should activate reverse lookup for the network
192.168. Given
that the zone is called 168.192.in-addr.arpa, it
should not be added to the host names. Therefore, all host names are
entered in their complete form—with their domain and with a
"." at the end. The remaining entries correspond to
those described for the previous example.com
example.
See the previous example for example.com.
Again this line specifies the name server responsible for this zone. This
time, however, the name is entered in its complete form with the domain
and a "." at the end.
These are the pointer records hinting at the IP addresses on the
respective hosts. Only the last part of the IP address is entered at the
beginning of the line, without the "." at the end.
Appending the zone to this (without the
.in-addr.arpa) results in the complete IP
address in reverse order.
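The reversal can be sketched in plain shell (the address is the one assigned to gate in the example above):

```shell
# Split an IPv4 address into octets and emit its reverse-lookup name
ip=192.168.5.1
IFS=. read -r a b c d <<EOF
$ip
EOF
echo "$d.$c.$b.$a.in-addr.arpa"
# prints 1.5.168.192.in-addr.arpa
```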
Normally, zone transfers between different versions of BIND should be possible without any problems.
The term dynamic update refers to operations by which
entries in the zone files of a master server are added, changed, or deleted.
This mechanism is described in RFC 2136. Dynamic update is configured
individually for each zone entry by adding an optional
allow-update or
update-policy rule. Zones to update dynamically
should not be edited by hand.
Transmit the entries to update to the server with the command
nsupdate. For the exact syntax of this command, check the
manual page for nsupdate (man 8
nsupdate). For security reasons, any such update should be
performed using TSIG keys as described in Section 19.8, “Secure Transactions”.
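nsupdate reads simple commands on standard input; what such input could look like is sketched below (server name, record, and TTL are illustrative assumptions):

```
; input for nsupdate (names and addresses are illustrative)
server dns.example.com
zone example.com
update add newhost.example.com. 3600 A 192.168.2.150
send
```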
Secure transactions can be made with transaction signatures (TSIGs) based on shared secret keys (also called TSIG keys). This section describes how to generate and use such keys.
Secure transactions are needed for communication between different servers and for the dynamic update of zone data. Making the access control dependent on keys is much more secure than merely relying on IP addresses.
Generate a TSIG key with the following command (for details, see
man dnssec-keygen):
tux > sudo dnssec-keygen -a hmac-md5 -b 128 -n HOST host1-host2
This creates two files with names similar to these:
Khost1-host2.+157+34265.private
Khost1-host2.+157+34265.key
The key itself (a string like ejIkuCyyGJwwuN3xAteKgg==)
is found in both files. To use it for transactions, the second file
(Khost1-host2.+157+34265.key) must be transferred to
the remote host, preferably in a secure way (using scp, for example). On the
remote server, the key must be included in the
/etc/named.conf file to enable a secure communication
between host1 and host2:
key host1-host2 {
algorithm hmac-md5;
secret "ejIkuCyyGJwwuN3xAteKgg==";
};
Make sure that the permissions of /etc/named.conf are
properly restricted. The default for this file is 0640,
with the owner being root and the
group named. As an alternative,
move the keys to an extra file with specially limited permissions, which is
then included from /etc/named.conf. To include an
external file, use:
include "filename"
Replace filename with an absolute path to your file with
keys.
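The include statement could then look like this (the path is purely illustrative):

```
# /etc/named.conf (excerpt)
include "/etc/named.d/tsig-keys.conf";
```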
To enable the server host1 to use the key for
host2 (which has the address 10.1.2.3
in this example), the server's /etc/named.conf must
include the following rule:
server 10.1.2.3 {
keys { host1-host2. ;};
};
Analogous entries must be included in the configuration files of
host2.
Add TSIG keys for any ACLs (access control lists, not to be confused with file system ACLs) that are defined for IP addresses and address ranges to enable transaction security. The corresponding entry could look like this:
allow-update { key host1-host2. ;};
This topic is discussed in more detail in the BIND Administrator
Reference Manual under update-policy.
DNSSEC, or DNS security, is described in RFC 2535. The tools available for DNSSEC are discussed in the BIND Manual.
A zone considered secure must have one or several zone keys associated with
it. These are generated with dnssec-keygen, as are the
host keys. The DSA encryption algorithm is currently used to generate these
keys. The public keys generated should be included in the corresponding zone
file with an $INCLUDE rule.
With the command dnssec-signzone, you can create sets of
generated keys (keyset- files), transfer them to the
parent zone in a secure manner, and sign them. This generates the files to
include for each zone in /etc/named.conf.
For more information, see the BIND Administrator Reference
Manual from the
bind-doc package, which is
installed under /usr/share/doc/packages/bind/arm.
Consider additionally consulting the RFCs referenced by the manual and the
manual pages included with BIND.
/usr/share/doc/packages/bind/README.SUSE contains
up-to-date information about BIND in openSUSE Leap.
The purpose of the Dynamic Host Configuration Protocol (DHCP) is to assign network settings centrally (from a server) rather than configuring them locally on every workstation. A host configured to use DHCP does not have control over a static address of its own, but configures itself completely and automatically according to directions from the server. If you use NetworkManager on the client side, you do not need to configure the client at all. This is useful if you have changing environments and only one interface active at a time. Never use NetworkManager on a machine that runs a DHCP server.
One way to configure a DHCP server is to identify each client using the hardware address of its network card (which should be fixed in most cases), then supply that client with identical settings each time it connects to the server. DHCP can also be configured to assign addresses to each relevant client dynamically from an address pool set up for this purpose. In the latter case, the DHCP server tries to assign the same address to the client each time it receives a request, even over extended periods. This works only if the network does not have more clients than addresses.
DHCP makes life easier for system administrators. Any changes, even bigger ones, related to addresses and the network configuration in general can be implemented centrally by editing the server's configuration file. This is much more convenient than reconfiguring numerous workstations. It is also much easier to integrate machines, particularly new machines, into the network, because they can be given an IP address from the pool. Retrieving the appropriate network settings from a DHCP server is especially useful in case of laptops regularly used in different networks.
In this chapter, the DHCP server will run in the same subnet as the
workstations, 192.168.2.0/24 with
192.168.2.1 as gateway. It has
the fixed IP address 192.168.2.254 and
serves two address ranges,
192.168.2.10 to
192.168.2.20 and
192.168.2.100 to
192.168.2.200.
A DHCP server supplies not only the IP address and the netmask, but also the host name, domain name, gateway, and name server addresses for the client to use. In addition to that, DHCP allows several other parameters to be configured in a centralized way, for example, a time server from which clients may poll the current time or even a print server.
To install a DHCP server, start YaST and select › . Choose › and select . Confirm the installation of the dependent packages to finish the installation process.
The YaST DHCP module can be set up to store the server configuration locally (on the host that runs the DHCP server) or to have its configuration data managed by an LDAP server. To use LDAP, set up your LDAP environment before configuring the DHCP server.
For more information about LDAP, see Chapter 5, LDAP—A Directory Service.
The YaST DHCP module (yast2-dhcp-server) allows
you to set up your own DHCP server for the local network. The module can run
in wizard mode or expert configuration mode.
When the module is started for the first time, a wizard starts, prompting you to make a few basic decisions concerning server administration. Completing this initial setup produces a very basic server configuration that should function in its essential aspects. The expert mode can be used to deal with more advanced configuration tasks. Proceed as follows:
Select the interface from the list to which the DHCP server should listen and click . After this, select to open the firewall for this interface, and click . See Figure 20.1, “DHCP Server: Card Selection”.
Use the check box to determine whether your DHCP settings should be automatically stored by an LDAP server. In the text boxes, provide the network specifics for all clients the DHCP server should manage. These specifics are the domain name, address of a time server, addresses of the primary and secondary name server, addresses of a print and a WINS server (for a mixed network with both Windows and Linux clients), gateway address, and lease time. See Figure 20.2, “DHCP Server: Global Settings”.
Configure how dynamic IP addresses should be assigned to clients. To do so, specify an IP range from which the server can assign addresses to DHCP clients. All these addresses must be covered by the same netmask. Also specify the lease time during which a client may keep its IP address without needing to request an extension of the lease. Optionally, specify the maximum lease time—the period during which the server reserves an IP address for a particular client. See Figure 20.3, “DHCP Server: Dynamic DHCP”.
Define how the DHCP server should be started. Specify whether to start the DHCP server automatically when the system is booted or manually when needed (for example, for testing purposes). Click to complete the configuration of the server. See Figure 20.4, “DHCP Server: Start-Up”.
Instead of using dynamic DHCP in the way described in the preceding steps, you can also configure the server to assign addresses in quasi-static fashion. Use the text boxes provided in the lower part to specify a list of the clients to manage in this way. Specifically, provide the and the to give to such a client, the , and the (token ring or Ethernet). Modify the list of clients, which is shown in the upper part with , , and . See Figure 20.5, “DHCP Server: Host Management”.
In addition to the configuration method discussed earlier, there is also an expert configuration mode that allows you to change the DHCP server setup in every detail. Start the expert configuration by clicking in the dialog (see Figure 20.4, “DHCP Server: Start-Up”).
In this first dialog, make the existing configuration editable by selecting . An important feature of the behavior of the DHCP server is its ability to run in a chroot environment, or chroot jail, to secure the server host. If the DHCP server should ever be compromised by an outside attack, the attacker will still be in the chroot jail, which prevents him from accessing the rest of the system. The lower part of the dialog displays a tree view with the declarations that have already been defined. Modify these with , , and . Selecting takes you to additional expert dialogs. See Figure 20.6, “DHCP Server: Chroot Jail and Declarations”. After selecting , define the type of declaration to add. With , view the log file of the server, configure TSIG key management, and adjust the configuration of the firewall according to the setup of the DHCP server.
The of the DHCP server are made up of several declarations. This dialog lets you set the declaration types , , , , , and . This example shows the selection of a new subnet (see Figure 20.7, “DHCP Server: Selecting a Declaration Type”).
This dialog allows you specify a new subnet with its IP address and netmask. In the middle part of the dialog, modify the DHCP server start options for the selected subnet using , , and . To set up dynamic DNS for the subnet, select .
If you chose to configure dynamic DNS in the previous dialog, you can now configure the key management for a secure zone transfer. Selecting takes you to another dialog in which to configure the interface for dynamic DNS (see Figure 20.10, “DHCP Server: Interface Configuration for Dynamic DNS”).
You can now activate dynamic DNS for the subnet by selecting . After doing so, use the drop-down box to activate the TSIG keys for forward and reverse zones, making sure that the keys are the same for the DNS and the DHCP server. With , enable the automatic update and adjustment of the global DHCP server settings according to the dynamic DNS environment. Finally, define which forward and reverse zones should be updated per dynamic DNS, specifying the name of the primary name server for each of the two zones. Selecting returns to the subnet configuration dialog (see Figure 20.8, “DHCP Server: Configuring Subnets”). Selecting again returns to the original expert configuration dialog.
To define the interfaces the DHCP server should listen to and to adjust the firewall configuration, select › from the expert configuration dialog. From the list of interfaces displayed, select one or more that should be attended by the DHCP server. If clients in all subnets need to be able to communicate with the server and the server host also runs a firewall, adjust the firewall accordingly. To do so, select . YaST then adjusts the rules of SuSEfirewall2 to the new conditions (see Figure 20.11, “DHCP Server: Network Interface and Firewall”), after which you can return to the original dialog by selecting .
After completing all configuration steps, close the dialog with . The server is now started with its new configuration.
Both the DHCP server and the DHCP clients are available for
openSUSE Leap. The DHCP server available is dhcpd (published by the Internet Systems
Consortium).
On the client side, there is dhcp-client (also from
ISC) and tools coming with the wicked package.
By default, the wicked tools are installed with the
services wickedd-dhcp4 and
wickedd-dhcp6. Both are launched automatically on
each system boot to watch for a DHCP server. They do not need a
configuration file to do their job and work out of the box in most standard
setups. For more complex situations, use the ISC
dhcp-client, which is controlled by means of the
configuration files /etc/dhclient.conf and
/etc/dhclient6.conf.
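A minimal /etc/dhclient.conf could be sketched as follows (the interface name and host name are illustrative assumptions):

```
# /etc/dhclient.conf (sketch)
interface "eth0" {
  send host-name "jupiter";
  request subnet-mask, broadcast-address, routers,
          domain-name, domain-name-servers;
}
```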
The core of any DHCP system is the dynamic host configuration protocol
daemon. This server leases addresses and watches how
they are used, according to the settings defined in the configuration file
/etc/dhcpd.conf. By changing the parameters and values
in this file, a system administrator can influence the program's behavior in
numerous ways. Look at the basic sample /etc/dhcpd.conf
file in Example 20.1, “The Configuration File /etc/dhcpd.conf”.
default-lease-time 600; # 10 minutes
max-lease-time 7200; # 2 hours
option domain-name "example.com";
option domain-name-servers 192.168.1.116;
option broadcast-address 192.168.2.255;
option routers 192.168.2.1;
option subnet-mask 255.255.255.0;
subnet 192.168.2.0 netmask 255.255.255.0
{
range 192.168.2.10 192.168.2.20;
range 192.168.2.100 192.168.2.200;
}

This simple configuration file should be sufficient to get the DHCP server to assign IP addresses in the network. Make sure that a semicolon is inserted at the end of each line, because otherwise dhcpd is not started.
The sample file can be divided into three sections. The first one defines
how many seconds an IP address is leased to a requesting client by default
(default-lease-time) before it should apply for renewal.
This section also includes a statement of the maximum period for which a
machine may keep an IP address assigned by the DHCP server without applying
for renewal (max-lease-time).
In the second part, some basic network parameters are defined on a global level:
The line option domain-name defines the default domain
of your network.
With the entry option domain-name-servers, specify up
to three values for the DNS servers used to resolve IP addresses into host
names and vice versa. Ideally, configure a name server on your machine or
somewhere else in your network before setting up DHCP. That name server
should also define a host name for each dynamic address and vice versa. To
learn how to configure your own name server, read
Chapter 19, The Domain Name System.
The line option broadcast-address defines the broadcast
address the requesting client should use.
With option routers, set where the server should send
data packets that cannot be delivered to a host on the local network
(according to the source and target host address and the subnet mask
provided). Usually, especially in smaller networks, this router is
identical to the Internet gateway.
With option subnet-mask, specify the netmask assigned
to clients.
The last section of the file defines a network, including a subnet mask. To
finish, specify the address range that the DHCP daemon should use to assign
IP addresses to interested clients. In Example 20.1, “The Configuration File /etc/dhcpd.conf”,
clients may be given any address between 192.168.2.10
and 192.168.2.20 or 192.168.2.100
and 192.168.2.200.
After editing these few lines, you should be able to activate the DHCP
daemon with the command systemctl start dhcpd. It will be
ready for use immediately. Use the command
rcdhcpd check-syntax
to perform a brief syntax check. If you encounter any unexpected problems
with your configuration (the server aborts with an error or does not return
done on start), you should be able to find out what has
gone wrong by looking for information either in the main system log that can
be queried with the command journalctl (see
Chapter 11, journalctl: Query the systemd Journal for more information).
On a default openSUSE Leap system, the DHCP daemon is started in a chroot
environment for security reasons. The configuration files must be copied to
the chroot environment so the daemon can find them. Normally, there is no
need to worry about this because the command systemctl start dhcpd
automatically copies the files.
DHCP can also be used to assign a predefined, static address to a specific client. Addresses assigned explicitly always take priority over dynamic addresses from the pool. A static address never expires in the way a dynamic address would, for example, if there were not enough addresses available and the server needed to redistribute them among clients.
To identify a client configured with a static address, dhcpd uses the
hardware address (which is a globally unique, fixed numerical code
consisting of six octet pairs) for the identification of all network
devices (for example, 00:30:6E:08:EC:80). If the respective
lines, like the ones in Example 20.2, “Additions to the Configuration File”, are added to
the configuration file of Example 20.1, “The Configuration File /etc/dhcpd.conf”, the DHCP daemon
always assigns the same set of data to the corresponding client.
host jupiter {
hardware ethernet 00:30:6E:08:EC:80;
fixed-address 192.168.2.100;
}
The name of the respective client (host
HOSTNAME, here jupiter)
is entered in the first line and the MAC address in the second line. On
Linux hosts, find the MAC address with the command ip
link show followed by the network device (for example,
eth0). The output should contain something like
link/ether 00:30:6E:08:EC:80
In the preceding example, a client with a network card having the MAC
address 00:30:6E:08:EC:80 is assigned the IP address
192.168.2.100 and the host name
jupiter automatically. The type of hardware to enter is
ethernet in nearly all cases, although
token-ring, which is often found on IBM systems, is also
supported.
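If you want to extract just the MAC address from the ip link show output, for example to paste it into a hardware ethernet line, a small shell sketch can do it (the sample output below is hard-coded so that the snippet also runs on machines without an eth0 device):

```shell
# Sample output of `ip link show eth0`; on a real host you would use
# ip_link_output=$(ip link show eth0) instead.
ip_link_output='2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500
    link/ether 00:30:6E:08:EC:80 brd ff:ff:ff:ff:ff:ff'

# The MAC address is the second field of the link/ether line.
mac=$(printf '%s\n' "$ip_link_output" | awk '/link\/ether/ {print $2}')
echo "$mac"
```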
To improve security, the openSUSE Leap version of the ISC's DHCP server
comes with the non-root/chroot patch by Ari Edelkind applied. This enables
dhcpd to run with the user ID
nobody and run in a chroot
environment (/var/lib/dhcp). To make this possible,
the configuration file dhcpd.conf must be located in
/var/lib/dhcp/etc. The init script automatically
copies the file to this directory when starting.
Control the server's behavior regarding this feature by means of entries in
the file /etc/sysconfig/dhcpd. To run dhcpd without
the chroot environment, set the variable
DHCPD_RUN_CHROOTED in
/etc/sysconfig/dhcpd to “no”.
To enable dhcpd to resolve host names even from within the chroot environment, some other configuration files must be copied as well:
/etc/localtime
/etc/host.conf
/etc/hosts
/etc/resolv.conf
These files are copied to /var/lib/dhcp/etc/ when
starting the init script. Take these copies into account for any changes
that they require if they are dynamically modified by scripts like
/etc/ppp/ip-up. However, there should be no need to
worry about this if the configuration file only specifies IP addresses
(instead of host names).
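Such a re-copy step can be sketched in shell, for example at the end of a script like /etc/ppp/ip-up. In this sketch a scratch directory stands in for /var/lib/dhcp/etc so that it can run without root privileges:

```shell
# Scratch directory standing in for the dhcpd chroot's /var/lib/dhcp/etc,
# so the sketch can run unprivileged.
chroot_etc=$(mktemp -d)

# Re-copy the name-resolution files that dhcpd needs inside the chroot.
for f in /etc/localtime /etc/host.conf /etc/hosts /etc/resolv.conf; do
    if [ -f "$f" ]; then
        cp "$f" "$chroot_etc/"
    fi
done
ls "$chroot_etc"
```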
If your configuration includes additional files that should be copied into
the chroot environment, set these under the variable
DHCPD_CONF_INCLUDE_FILES in the file
/etc/sysconfig/dhcpd. To ensure that the DHCP logging
facility keeps working even after a restart of the syslog daemon, there is
an additional entry SYSLOGD_ADDITIONAL_SOCKET_DHCP
in the file /etc/sysconfig/syslog.
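As a sketch, the relevant part of /etc/sysconfig/dhcpd could then look like this (the extra include file is a hypothetical example):

```
# Run dhcpd in the chroot (/var/lib/dhcp); set to "no" to disable
DHCPD_RUN_CHROOTED="yes"

# Additional files to copy into the chroot;
# /etc/dhcpd.d/static-hosts.conf is a hypothetical include file
DHCPD_CONF_INCLUDE_FILES="/etc/dhcpd.d/static-hosts.conf"
```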
More information about DHCP is available at the Web site of the
Internet Systems Consortium
(http://www.isc.org/products/DHCP/). Information is
also available in the dhcpd, dhcpd.conf,
dhcpd.leases, and dhcp-options man pages.
Using Samba, a Unix machine can be configured as a file and print server for macOS, Windows, and OS/2 machines. Samba has developed into a fully-fledged and rather complex product. Configure Samba with YaST, or by editing the configuration file manually.
The following are some terms used in Samba documentation and in the YaST module.
Samba uses the SMB (server message block) protocol that is based on the NetBIOS services. Microsoft released the protocol so other software manufacturers could establish connections to a Microsoft domain network. With Samba, the SMB protocol works on top of the TCP/IP protocol, so the TCP/IP protocol must be installed on all clients.
CIFS (common Internet file system) protocol is another protocol supported by Samba. CIFS defines a standard remote file system access protocol for use over the network, enabling groups of users to work together and share documents across the network.
NetBIOS is a software interface (API) designed for communication between machines providing a name service. It enables machines connected to the network to reserve names for themselves. After reservation, these machines can be addressed by name. There is no central process that checks names. Any machine on the network can reserve as many names as it wants as long as the names are not already in use. The NetBIOS interface can be implemented for different network architectures. An implementation that works relatively closely with network hardware is called NetBEUI, but this is often called NetBIOS. Network protocols implemented with NetBIOS are IPX from Novell (NetBIOS via TCP/IP) and TCP/IP.
The NetBIOS names sent via TCP/IP have nothing in common with the names
used in /etc/hosts or those defined by DNS. NetBIOS
uses its own, completely independent naming convention. However, it is
recommended to use names that correspond to DNS host names to make
administration easier or use DNS natively. This is the default used by
Samba.
Samba server provides SMB/CIFS services and NetBIOS over IP naming services to clients. For Linux, there are three daemons for Samba server: smbd for SMB/CIFS services, nmbd for naming services, and winbind for authentication.
The Samba client is a system that uses Samba services from a Samba server over the SMB protocol. Common operating systems, such as Windows and macOS support the SMB protocol. The TCP/IP protocol must be installed on all computers. Samba provides a client for the different Unix flavors. For Linux, there is a kernel module for SMB that allows the integration of SMB resources on the Linux system level. You do not need to run any daemon for the Samba client.
SMB servers provide resources to the clients by means of shares. Shares are printers and directories with their subdirectories on the server. A share is exported by means of a name and can be accessed by that name. The share name can be set to any name; it does not need to be the name of the exported directory. A printer is also assigned a name. Clients can access the printer by its name.
A domain controller (DC) is a server that handles accounts in a domain. For data replication, additional domain controllers are available in one domain.
To install a Samba server, start YaST and select › . Choose › and select . Confirm the installation of the required packages to finish the installation process.
You can start or stop the Samba server automatically (during boot) or manually. Starting and stopping policy is a part of the YaST Samba server configuration described in Section 21.4.1, “Configuring a Samba Server with YaST”.
From a command line, stop services required for Samba with
systemctl stop smb nmb and start them with
systemctl start nmb smb. The smb
service takes care of winbind if needed.
winbind
winbind is an independent service, and as such is
also offered as an individual samba-winbind
package.
A Samba server in openSUSE® Leap can be configured in two different ways: with YaST or manually. Manual configuration offers a higher level of detail, but lacks the convenience of the YaST GUI.
To configure a Samba server, start YaST and select › .
When starting the module for the first time, the dialog starts, prompting you to make a few basic decisions concerning administration of the server. At the end of the configuration it prompts for the Samba administrator password (). For later starts, the dialog appears.
The dialog consists of two steps and optional detailed settings:
Select an existing name from or enter a new one and click .
In the next step, specify whether your server should act as a primary domain controller (PDC), backup domain controller (BDC), or not act as a domain controller. Continue with .
If you do not want to proceed with a detailed server configuration, confirm with . Then in the final pop-up box, set the .
You can change all settings later in the dialog with the , , , , and tabs.
During the first start of the Samba server module the dialog appears directly after the two initial steps described in Section 21.4.1.1, “Initial Samba Configuration”. Use it to adjust your Samba server configuration.
After editing your configuration, click to save your settings.
In the tab, configure the start of the Samba server. To start the service every time your system boots, select . To activate manual start, choose . More information about starting a Samba server is provided in Section 21.3, “Starting and Stopping Samba”.
In this tab, you can also open ports in your firewall. To do so, select . If you have multiple network interfaces, select the network interface for Samba services by clicking , selecting the interfaces, and clicking .
In the tab, you can determine the domain with which the host is associated () and whether to use an alternative host name in the network (). It is also possible to use Microsoft Windows Internet Name Service (WINS) for name resolution. In this case, activate and decide whether to . To set expert global settings or set a user authentication source, for example LDAP instead of TDB database, click .
To enable users from other domains to access your domain, make the appropriate settings in the tab. To add a new domain, click . To remove the selected domain, click .
In the tab , you can determine the LDAP server to use for authentication. To test the connection to your LDAP server, click . To set expert LDAP settings or use default values, click .
For more information about LDAP configuration, see Chapter 5, LDAP—A Directory Service.
If you intend to use Samba as a server, install
samba. The main configuration
file for Samba is /etc/samba/smb.conf. This file can
be divided into two logical parts. The [global] section
contains the central and global settings. The following default sections
contain the individual file and printer shares:
[homes]
[profiles]
[users]
[groups]
[printers]
[print$]
Using this approach, options for the shares can be set individually in
each share section or globally in the [global] section, which
makes the configuration file easier to understand.
The following parameters of the [global] section should
be modified to match the requirements of your network setup, so other
machines can access your Samba server via SMB in a Windows environment.
workgroup = WORKGROUP
This line assigns the Samba server to a workgroup. Replace
WORKGROUP with an appropriate workgroup of your
networking environment. Your Samba server appears under its DNS name
unless this name has been assigned to some other machine in the
network. If the DNS name is not available, set the server name using
netbiosname=MYNAME. For
more details about this parameter, see the
smb.conf man page.
os level = 20
This parameter triggers whether your Samba server tries to become LMB
(local master browser) for its workgroup. Choose a very low value such
as 2 to spare the existing Windows network from any
interruptions caused by a misconfigured Samba server. More information
about this topic can be found in the Network Browsing chapter
of the Samba 3 Howto; for more information on the Samba 3 Howto, see
Section 21.9, “For More Information”.
If no other SMB server is in your network (such as a Windows 2000
server) and you want the Samba server to keep a list of all systems
present in the local environment, set the os level
to a higher value (for example, 65). Your Samba
server is then chosen as LMB for your local network.
When changing this setting, consider carefully how this could affect an existing Windows network environment. First test the changes in an isolated network or at a noncritical time of day.
wins support and wins server
To integrate your Samba server into an existing Windows network with an
active WINS server, enable the wins server option and
set its value to the IP address of that WINS server.
If your Windows machines are connected to separate subnets and need to
still be aware of each other, you have to set up a WINS server. To turn
a Samba server into such a WINS server, set the option wins
support = Yes. Make sure that only one Samba server of the
network has this setting enabled. The options wins
server and wins support must never be
enabled at the same time in your smb.conf file.
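Taken together, the [global] parameters discussed above might look as follows in smb.conf (the workgroup name, NetBIOS name, and WINS server address are assumptions for illustration):

```
[global]
        workgroup = EXAMPLE-WG
        netbios name = SAMBASRV
        os level = 2
        # Either point to an existing WINS server ...
        wins server = 192.168.2.3
        # ... or act as the WINS server yourself, but never both:
        # wins support = Yes
```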
To improve security, each share access can be protected with a password. SMB offers the following ways of checking permissions:
security = user: This variant introduces the concept of the user to SMB. Each user must register with the server with his or her own password. After registration, the server can grant access to individual exported shares dependent on user names.
security = ADS: In this mode, Samba will act as a domain member in an Active Directory environment. To operate in this mode, the machine running Samba needs Kerberos installed and configured. You must join the machine using Samba to the ADS realm. This can be done using the YaST module.
security = domain: This mode will only work correctly if the machine has
been joined into a Windows NT domain. Samba will try to validate the user
name and password by passing them to a Windows NT primary or backup domain
controller, the same way a Windows NT server would. It expects the
encrypted passwords parameter to be set to yes.
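For example, a [global] section for the ADS mode could be sketched like this (realm and workgroup are placeholders, and Kerberos must already be configured as noted above):

```
[global]
        security = ADS
        realm = EXAMPLE.COM       # assumed Kerberos realm
        workgroup = EXAMPLE       # assumed short domain name
```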
The selection of share, user, server, or domain level security applies to the entire server. It is not possible to offer individual shares of a server configuration with share level security and others with user level security. However, you can run a separate Samba server for each configured IP address on a system.
More information about this subject can be found in the Samba 3 HOWTO.
For multiple servers on one system, pay attention to the options
interfaces and bind interfaces only.
Clients can only access the Samba server via TCP/IP. NetBEUI and NetBIOS via IPX cannot be used with Samba.
Configure a Samba client to access resources (files or printers) on the Samba or Windows server. Enter the NT or Active Directory domain or workgroup in the dialog › . If you activate , the user authentication runs over the Samba, NT or Kerberos server.
Click for advanced configuration
options. For example, use the
table to enable mounting server home directory automatically with
authentication. This way users can access their home directories when
hosted on CIFS. For details, see the pam_mount man
page.
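As a sketch, such a CIFS home directory entry in /etc/security/pam_mount.conf.xml could look like the following (server and share names are assumptions; see the pam_mount man page for the authoritative syntax):

```
<volume
  user="*"
  fstype="cifs"
  server="sambaserver.example.com"
  path="homes"
  mountpoint="~" />
```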
After completing all settings, confirm the dialog to finish the configuration.
In networks where predominantly Windows clients are found, it is often
preferable that users may only register with a valid account and password.
In a Windows-based network, this task is handled by a primary domain
controller (PDC). You can use a Windows NT server configured as PDC, but
this task can also be done with a Samba server. The entries that must be
made in the [global] section of
smb.conf are shown in
Example 21.3, “Global Section in smb.conf”.
[global]
workgroup = WORKGROUP
domain logons = Yes
domain master = Yes
It is necessary to prepare user accounts and passwords in an encryption
format that conforms with Windows. Do this with the command
smbpasswd -a name. Create the domain
account for the computers, required by the Windows domain concept, with the
following commands:
useradd hostname\$
smbpasswd -a -m hostname
With the useradd command, a dollar sign is added. The
command smbpasswd inserts this automatically when the
parameter -m is used. The commented configuration example
(/usr/share/doc/packages/samba/examples/smb.conf.SUSE)
contains settings that automate this task.
add machine script = /usr/sbin/useradd -g nogroup -c "NT Machine Account" \ -s /bin/false %m\$
To make sure that Samba can execute this script correctly, choose a Samba
user with the required administrator permissions and add it to the
ntadmin group. Then all users
belonging to this Linux group can be assigned Domain
Admin status with the command:
net groupmap add ntgroup="Domain Admins" unixgroup=ntadmin
If you run Linux servers and Windows servers together, you can build two independent authentication systems and networks or connect the servers to one network with one central authentication system. Because Samba can cooperate with an Active Directory domain, you can join your openSUSE Leap server to Active Directory (AD).
To join an AD domain proceed as follows:
Log in as root and start YaST.
Start › .
Enter the domain to join at in the screen.
Check to use the SMB source for Linux authentication on your server.
Click and confirm the domain join when prompted for it.
Provide the password for the Windows Administrator on the AD server and click .
Your server is now set up to pull in all authentication data from the Active Directory domain controller.
In an environment with more than one Samba server, UIDs and GIDs will not be created consistently. The UIDs that get assigned to users will be dependent on the order in which they first log in, which results in UID conflicts across servers. To fix this, you need to use identity mapping. See https://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/idmapper.html for more details.
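One common approach is to configure an idmap backend with a fixed mapping in the [global] section of smb.conf on every server, for example (the domain name and ranges are illustrative; see the linked HOWTO for details):

```
[global]
        # Default backend for unknown domains
        idmap config * : backend = tdb
        idmap config * : range = 10000-19999
        # Deterministic mapping for the EXAMPLE domain
        idmap config EXAMPLE : backend = rid
        idmap config EXAMPLE : range = 20000-99999
```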
This section introduces more advanced techniques to manage both the client and server part of the Samba suite.
Samba allows clients to remotely manipulate file and directory compression flags for shares placed on the Btrfs file system. Windows Explorer provides the ability to flag files/directories for transparent compression via the › › dialog:
Files flagged for compression are transparently compressed and decompressed by the underlying file system when accessed or modified. This normally results in storage capacity savings at the expense of extra CPU overhead when accessing the file. New files and directories inherit the compression flag from the parent directory, unless created with the FILE_NO_COMPRESSION option.
Windows Explorer presents compressed files and directories visually differently from those that are not compressed:
You can enable Samba share compression either manually by adding
vfs objects = btrfs
to the share configuration in /etc/samba/smb.conf, or
using YaST: › › , and checking
.
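A complete share section with compression support enabled could be sketched as follows (share name and path are assumptions; the path must be on a Btrfs file system):

```
[btrfs_share]
        path = /srv/samba/btrfs_share
        vfs objects = btrfs
        read only = no
```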
Snapshots, also called Shadow Copies, are copies of the state of a file system subvolume at a certain point of time. Snapper is the tool to manage these snapshots in Linux. Snapshots are supported on the Btrfs file system or thin-provisioned LVM volumes. The Samba suite supports managing of remote snapshots through the FSRVP protocol on both the server and client side.
Snapshots on a Samba server can be exposed to remote Windows clients as file or directory previous versions.
To enable snapshots on a Samba server, the following conditions must be fulfilled:
The SMB network share resides on a Btrfs subvolume.
The SMB network share path has a related snapper configuration file. You can create the snapper file with
tux > sudo snapper -c <cfg_name> create-config /path/to/share
For more information on snapper, see Chapter 3, System Recovery and Snapshot Management with Snapper.
The snapshot directory tree must allow access for relevant users. For
more information, see the PERMISSIONS section of the vfs_snapper manual
page (man 8 vfs_snapper).
To support remote snapshots, you need to modify the
/etc/samba/smb.conf file. You can do it either with
› › , or
manually by enhancing the relevant share section with
vfs objects = snapper
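Put together, a share section prepared for previous versions could look like this (the share name is an assumption; the path must match the snapper configuration created above):

```
[shadow_share]
        path = /path/to/share
        vfs objects = snapper
        read only = no
```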
Note that you need to restart the Samba service for manual
smb.conf changes to take effect:
tux > sudo systemctl restart nmb smb
After being configured, snapshots created by snapper for the Samba share path can be accessed from Windows Explorer from a file or directory's tab.
By default, snapshots can only be created and deleted on the Samba server locally, via the snapper command line utility, or using snapper's time line feature.
Samba can be configured to process share snapshot creation and deletion requests from remote hosts using the File Server Remote VSS Protocol (FSRVP).
In addition to the configuration and prerequisites documented in
Section 21.8.2.1, “Previous Versions”, the following global
configuration is required in /etc/samba/smb.conf:
[global]
rpc_daemon:fssd = fork
registry shares = yes
include = registry
FSRVP clients, including Samba's rpcclient and Windows
Server 2012 DiskShadow.exe, can then instruct Samba to
create or delete a snapshot for a given share, and expose the snapshot as
a new share.
Managing Snapshots Remotely from Linux with rpcclient
The samba-client package contains an FSRVP client
that can remotely request a Windows/Samba server to create and expose a
snapshot of a given share. You can then use existing tools in
openSUSE Leap to mount the exposed share and back up its files. Requests
to the server are sent using the rpcclient binary.
Using rpcclient to Request a Windows Server 2012 Share Snapshot
Connect to win-server.example.com server as an
administrator in an EXAMPLE domain:
root # rpcclient -U 'EXAMPLE\Administrator' ncacn_np:win-server.example.com[ndr64,sign]
Enter EXAMPLE/Administrator's password:
Check that the SMB share is visible for rpcclient:
root # rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path: C:\Shares\windows_server_2012_share
password: (null)
Check that the SMB share supports snapshot creation:
root # rpcclient $> fss_is_path_sup windows_server_2012_share \
UNC \\WIN-SERVER\windows_server_2012_share\ supports shadow copy requests
Request the creation of a share snapshot:
root # rpcclient $> fss_create_expose backup ro windows_server_2012_share
13fe880e-e232-493d-87e9-402f21019fb6: shadow-copy set created
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
\\WIN-SERVER\windows_server_2012_share\ shadow-copy added to set
13fe880e-e232-493d-87e9-402f21019fb6: prepare completed in 0 secs
13fe880e-e232-493d-87e9-402f21019fb6: commit completed in 1 secs
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
share windows_server_2012_share@{1C26544E-8251-445F-BE89-D1E0A3938777} \
exposed as a snapshot of \\WIN-SERVER\windows_server_2012_share\
Confirm that the snapshot share is exposed by the server:
root # rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path: C:\Shares\windows_server_2012_share
password: (null)
netname: windows_server_2012_share@{1C26544E-8251-445F-BE89-D1E0A3938777}
remark: (null)
path: \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy{F6E6507E-F537-11E3-9404-B8AC6F927453}\Shares\windows_server_2012_share\
password: (null)
Attempt to delete the snapshot share:
root # rpcclient $> fss_delete windows_server_2012_share \
13fe880e-e232-493d-87e9-402f21019fb6 1c26544e-8251-445f-be89-d1e0a3938777
13fe880e-e232-493d-87e9-402f21019fb6(1c26544e-8251-445f-be89-d1e0a3938777): \
\\WIN-SERVER\windows_server_2012_share\ shadow-copy deleted
Confirm that the snapshot share has been removed by the server:
root # rpcclient $> netshareenum
netname: windows_server_2012_share
remark:
path: C:\Shares\windows_server_2012_share
password: (null)
DiskShadow.exe
You can manage snapshots of SMB shares on the Linux Samba server from a
Windows environment acting as a client as well. Windows Server 2012
includes the DiskShadow.exe utility, which can manage
remote shares similarly to rpcclient described in
Section 21.8.2.3, “Managing Snapshots Remotely from Linux with rpcclient”. Note that you
need to carefully set up the Samba server first.
Following is an example procedure to set up the Samba server so that the
Windows Server client can manage its share's snapshots. Note that EXAMPLE
is the Active Directory domain used in the testing environment,
fsrvp-server.example.com is the host name of the Samba server, and
/srv/smb is the path to the SMB share.
Join the Active Directory domain via YaST. For more information, see Section 21.7, “Samba Server in the Network with Active Directory”.
Ensure that the Active Directory DNS entry is correct:
fsrvp-server:~ # net -U 'Administrator' ads dns register \
fsrvp-server.example.com <IP address>
Successfully registered hostname with DNS
Create a Btrfs subvolume at /srv/smb:
fsrvp-server:~ # btrfs subvolume create /srv/smb
Create a snapper configuration file for the path /srv/smb:
fsrvp-server:~ # snapper -c <snapper_config> create-config /srv/smb
Create new share with path /srv/smb, and YaST
check box enabled. Make sure to add
the following snippets to the global section of
/etc/samba/smb.conf as mentioned in
Section 21.8.2.2, “Remote Share Snapshots”:
[global]
rpc_daemon:fssd = fork
registry shares = yes
include = registry
Restart Samba with systemctl restart nmb smb.
Configure snapper permissions:
fsrvp-server:~ # snapper -c <snapper_config> set-config \
ALLOW_USERS="EXAMPLE\\\\Administrator EXAMPLE\\\\win-client$"
Ensure that any ALLOW_USERS are also permitted traversal of the
.snapshots subdirectory.
fsrvp-server:~ # snapper -c <snapper_config> set-config SYNC_ACL=yes
Be careful about the '\' escapes! Escape twice to ensure that the value
stored in
/etc/snapper/configs/<snapper_config> is
escaped once.
"EXAMPLE\win-client$" corresponds to the Windows client computer account. Windows issues initial FSRVP requests while authenticated with this account.
Grant Windows client account necessary privileges:
fsrvp-server:~ # net -U 'Administrator' rpc rights grant \
"EXAMPLE\\win-client$" SeBackupPrivilege
Successfully granted rights.
The previous command is not needed for the "EXAMPLE\Administrator" user, which has privileges already granted.
DiskShadow.exe in Action
Boot Windows Server 2012 (example host name WIN-CLIENT).
Join the same Active Directory domain EXAMPLE as with the openSUSE Leap.
Reboot.
Open Powershell.
Start DiskShadow.exe and begin the backup procedure:
PS C:\Users\Administrator.EXAMPLE> diskshadow.exe
Microsoft DiskShadow version 1.0
Copyright (C) 2012 Microsoft Corporation
On computer: WIN-CLIENT, 6/17/2014 3:53:54 PM

DISKSHADOW> begin backup
Specify that shadow copy persists across program exit, reset or reboot:
DISKSHADOW> set context PERSISTENT
Check whether the specified share supports snapshots, and create one:
DISKSHADOW> add volume \\fsrvp-server\sles_snapper
DISKSHADOW> create
Alias VSS_SHADOW_1 for shadow ID {de4ddca4-4978-4805-8776-cdf82d190a4a} set as \
environment variable.
Alias VSS_SHADOW_SET for shadow set ID {c58e1452-c554-400e-a266-d11d5c837cb1} \
set as environment variable.
Querying all shadow copies with the shadow copy set ID \
{c58e1452-c554-400e-a266-d11d5c837cb1}
* Shadow copy ID = {de4ddca4-4978-4805-8776-cdf82d190a4a} %VSS_SHADOW_1%
- Shadow copy set: {c58e1452-c554-400e-a266-d11d5c837cb1} %VSS_SHADOW_SET%
- Original count of shadow copies = 1
- Original volume name: \\FSRVP-SERVER\SLES_SNAPPER\ \
[volume not on this machine]
- Creation time: 6/17/2014 3:54:43 PM
- Shadow copy device name:
\\FSRVP-SERVER\SLES_SNAPPER@{31afd84a-44a7-41be-b9b0-751898756faa}
- Originating machine: FSRVP-SERVER
- Service machine: win-client.example.com
- Not exposed
- Provider ID: {89300202-3cec-4981-9171-19f59559e0f2}
- Attributes: No_Auto_Release Persistent FileShare
Number of shadow copies listed: 1
Finish the backup procedure:
DISKSHADOW> end backup
After the snapshot has been created, try to delete it and verify the deletion:
DISKSHADOW> delete shadows volume \\FSRVP-SERVER\SLES_SNAPPER\
Deleting shadow copy {de4ddca4-4978-4805-8776-cdf82d190a4a} on volume \
\\FSRVP-SERVER\SLES_SNAPPER\ from provider \
{89300202-3cec-4981-9171-19f59559e0f2} [Attributes: 0x04000009]...
Number of shadow copies deleted: 1
DISKSHADOW> list shadows all
Querying all shadow copies on the computer ...
No shadow copies found in system.
Documentation for Samba ships with the samba-doc
package which is not installed by default. Install it with zypper
install samba-doc. Enter apropos
samba at the command line to display some manual pages or
browse the /usr/share/doc/packages/samba directory for
more online documentation and examples. Find a commented example
configuration (smb.conf.SUSE) in the
examples subdirectory. Another file to look for Samba
related information is
/usr/share/doc/packages/samba/README.SUSE.
The Samba HOWTO (see https://wiki.samba.org) provided by the Samba team includes a section about troubleshooting. In addition to that, Part V of the document provides a step-by-step guide to checking your configuration.
The Network File System (NFS) is a standardized, well-proven and widely supported network protocol that allows files to be shared between separate hosts.
The Network Information Service (NIS) can be used to have a centralized user management in the network. Combining NFS and NIS allows using file and directory permissions for access control in the network. NFS with NIS makes a network transparent to the user.
In the default configuration, NFS completely trusts the network and thus any machine that is connected to a trusted network. Any user with administrator privileges on any computer with physical access to any network that the NFS server trusts can access any files that the server makes available.
In many cases, this level of security is perfectly satisfactory, such as when the network that is trusted is truly private, often localized to a single cabinet or machine room, and no unauthorized access is possible. In other cases the need to trust a whole subnet as a unit is restrictive and there is a need for more fine-grained trust. To meet the need in these cases, NFS supports various security levels using the Kerberos infrastructure. Kerberos requires NFSv4, which is used by default. For details, see Chapter 6, Network Authentication with Kerberos.
The following are terms used in the YaST module.
A directory exported by an NFS server, which clients can integrate into their systems.
The NFS client is a system that uses NFS services from an NFS server over the Network File System protocol. The TCP/IP protocol is already integrated into the Linux kernel; there is no need to install any additional software.
The NFS server provides NFS services to clients. A running server depends
on the following daemons: nfsd
(worker), idmapd (ID-to-name
mapping for NFSv4, needed for certain scenarios only), statd (file locking), and mountd (mount requests).
NFSv3 is the version 3 implementation, the “old” stateless NFS that supports client authentication.
NFSv4 is the new version 4 implementation that supports secure user authentication via Kerberos. NFSv4 requires only a single port and thus is better suited for environments behind a firewall than NFSv3.
The protocol is specified in http://tools.ietf.org/html/rfc3530.
Parallel NFS, a protocol extension of NFSv4. pNFS clients can directly access the data on an NFS server.
In principle, all exports can be made using IP addresses only. To avoid time-outs, you need a working DNS system. DNS is necessary at least for logging purposes, because the mountd daemon does reverse lookups.
The NFS server is not part of the default installation. To install the NFS server using YaST, choose › , select , and enable the option in the section. Press to install the required packages.
Like NIS, NFS is a client/server system. However, a machine can be both—it can supply file systems over the network (export) and mount file systems from other hosts (import).
Mounting NFS volumes locally on the exporting server is not supported on openSUSE Leap.
Configuring an NFS server can be done either through YaST or manually. For authentication, NFS can also be combined with Kerberos.
With YaST, turn a host in your network into an NFS server—a server that exports directories and files to all hosts granted access to it or to all members of a group. Thus, the server can also provide applications without installing them locally on every host.
To set up such a server, proceed as follows:
Start YaST and select › ; see Figure 22.1, “NFS Server Configuration Tool”. You may be prompted to install additional software.
Activate the radio button.
If a firewall is active on your system (SuSEfirewall2), check
. YaST adapts its
configuration for the NFS server by enabling the
nfs service.
Check whether you want to . If you deactivate NFSv4, YaST will only support NFSv3. For information about enabling NFSv2, see Note: NFSv2.
If NFSv4 is selected, additionally enter the appropriate NFSv4 domain
name. This parameter is used by the idmapd daemon that is required for Kerberos
setups or if clients cannot work with numeric user names. Leave it as
localdomain (the default) if you do not run
idmapd or do not have any
special requirements. For more information on the idmapd daemon see /etc/idmapd.conf.
Click if you need secure access to the server. A prerequisite for this is to have Kerberos installed on your domain and to have both the server and the clients kerberized. Click to proceed with the next configuration dialog.
Click in the upper half of the dialog to export your directory.
If you have not configured the allowed hosts already, another dialog for entering the client information and options pops up automatically. Enter the host wild card (usually you can leave the default settings as they are).
There are four possible types of host wild cards that can be set for each
host: a single host (name or IP address), netgroups, wild cards (such as
* indicating all machines can access the server), and
IP networks.
For more information about these options, see the
exports man page.
Click to complete the configuration.
The configuration files for the NFS export service are
/etc/exports and
/etc/sysconfig/nfs. In addition to these files,
/etc/idmapd.conf is needed for the NFSv4 server
configuration with kerberized NFS or if the clients cannot work with
numeric user names.
To start or restart the services, run the command
systemctl restart nfsserver. This also restarts the
RPC portmapper that is required by the NFS server.
To make sure the NFS server always starts at boot time, run sudo
systemctl enable nfsserver.
NFSv4 is the latest version of NFS protocol available on openSUSE Leap. Configuring directories for export with NFSv4 is now the same as with NFSv3.
On openSUSE
prior to Leap, the bind mount in
/etc/exports was mandatory. It is still supported,
but now deprecated.
/etc/exports
The /etc/exports file contains a list of entries.
Each entry indicates a directory that is shared and how it is shared. A
typical entry in /etc/exports consists of:
/SHARED/DIRECTORY HOST(OPTION_LIST)
For example:
/export/data 192.168.1.2(rw,sync)
Here the IP address 192.168.1.2 is used to identify
the allowed client. You can also use the name of the host, a wild card
indicating a set of hosts (*.abc.com,
*, etc.), or netgroups
(@my-hosts).
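The host field forms described above can be combined freely within one exports file. The following sketch illustrates all four forms; the directory names, domain, netgroup, and network below are placeholders, not values from your system:

```
/export/data     192.168.1.2(rw,sync)         # a single host by IP address
/export/public   *.abc.com(ro)                # a wild card matching a set of hosts
/export/build    @my-hosts(rw,no_root_squash) # a netgroup
/export/backup   192.168.1.0/24(ro,sync)      # an IP network
```

Each entry keeps the general `/SHARED/DIRECTORY HOST(OPTION_LIST)` shape shown above; only the host specification differs.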
For a detailed explanation of all options and their meaning, refer to
the man page of /etc/exports (man
exports).
In case you have modified /etc/exports while the
NFS server was running, you need to restart it for the changes to become
active: sudo systemctl restart nfsserver.
/etc/sysconfig/nfs
The /etc/sysconfig/nfs file contains a few
parameters that determine NFSv4 server daemon behavior. It is important
to set the parameter NFS4_SUPPORT to
yes (default). NFS4_SUPPORT
determines whether the NFS server supports NFSv4 exports and clients.
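The relevant line in /etc/sysconfig/nfs looks like the following. This is a minimal excerpt only; the file itself contains many more documented parameters:

```
# Enable NFSv4 support (exports and clients); set to "no" to fall back to NFSv3
NFS4_SUPPORT="yes"
```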
In case you have modified /etc/sysconfig/nfs while
the NFS server was running, you need to restart it for the changes to
become active: sudo systemctl restart nfsserver.
If NFS clients still depend on NFSv2, enable it on the server in
/etc/sysconfig/nfs by setting:
NFSD_OPTIONS="-V2"
MOUNTD_OPTIONS="-V2"
After restarting the service, check whether version 2 is available with the command:
tux > cat /proc/fs/nfsd/versions
+2 +3 +4 +4.1 -4.2
/etc/idmapd.conf
Starting with SLE 12 SP1, the idmapd daemon is only required if Kerberos
authentication is used, or if clients cannot work with numeric user
names. Linux clients can work with numeric user names since Linux kernel
2.6.39. The idmapd daemon does
the name-to-ID mapping for NFSv4 requests to the server and replies to
the client.
If required, idmapd
needs to run on the NFSv4 server. Name-to-ID mapping on the client will
be done by nfsidmap provided by the package
nfs-client.
Make sure that there is a uniform way in which user names and IDs (uid) are assigned to users across all machines that might share file systems using NFS. This can be achieved by using NIS, LDAP, or any uniform domain authentication mechanism in your domain.
The parameter Domain must be set to the same value for both
client and server in the /etc/idmapd.conf file. If
you are not sure, leave the domain as localdomain in
the server and client files. A sample configuration file looks like the
following:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = localdomain

[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
To start the idmapd daemon, run
systemctl start nfs-idmapd. In case you have modified
/etc/idmapd.conf while the daemon was running, you
need to restart it for the changes to become active: systemctl
restart nfs-idmapd.
For more information, see the man pages of idmapd and
idmapd.conf (man idmapd and
man idmapd.conf).
To use Kerberos authentication for NFS, Generic Security Services (GSS) must be enabled. Select in the initial YaST NFS Server dialog. You must have a working Kerberos server to use this feature. YaST does not set up the server but only uses the provided functionality. If you want to use Kerberos authentication in addition to the YaST configuration, complete at least the following steps before running the NFS configuration:
Make sure that both the server and the client are in the same Kerberos
domain. They must access the same KDC (Key Distribution Center) server
and share their krb5.keytab file (the default
location on any machine is /etc/krb5.keytab). For
more information about Kerberos, see
Chapter 6, Network Authentication with Kerberos.
Start the gssd service on the client with systemctl start
rpc-gssd.service.
Start the svcgssd service on the server with systemctl start
rpc-svcgssd.service.
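To have these services come up again after a reboot, they can additionally be enabled. This is a sketch beyond the one-time start commands above, assuming the unit names used there:

```
tux > sudo systemctl enable rpc-gssd.service     # on the client
tux > sudo systemctl enable rpc-svcgssd.service  # on the server
```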
Kerberos authentication also requires the idmapd daemon to run on the server. For more
information refer to /etc/idmapd.conf.
For more information about configuring kerberized NFS, refer to the links in Section 22.5, “For More Information”.
To configure your host as an NFS client, you do not need to install additional software. All needed packages are installed by default.
Authorized users can mount NFS directories from an NFS server into the local file tree using the YaST NFS client module. Proceed as follows:
Start the YaST NFS client module.
Click in the tab. Enter the host name of the NFS server, the directory to import, and the mount point at which to mount this directory locally.
When using NFSv4, select in the
tab. Additionally, the must contain the same value as used by the NFSv4
server. The default domain is localdomain.
To use Kerberos authentication for NFS, GSS security must be enabled. Select .
Enable in the tab if you use a Firewall and want to allow access to the service from remote computers. The firewall status is displayed next to the check box.
Click to save your changes.
The configuration is written to /etc/fstab and the
specified file systems are mounted. When you start the YaST configuration
client at a later time, it also reads the existing configuration from this
file.
On (diskless) systems, where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.
When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems as the root partition cannot be cleanly unmounted as the network connection to the NFS share is already not activated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 13.4.1.2.5, “Activating the Network Device” and choose in the pane.
The prerequisite for importing file systems manually from an NFS server is
a running RPC port mapper. The nfs service takes care of
starting it properly; start the service by entering systemctl start
nfs as root. Then
remote file systems can be mounted in the file system like local partitions
using mount:
tux > sudo mount HOST:REMOTE-PATH LOCAL-PATH
To import user directories from the nfs.example.com
machine, for example, use:
tux > sudo mount nfs.example.com:/home /home
The autofs daemon can be used to mount remote file systems automatically.
Add the following entry to the /etc/auto.master file:
/nfsmounts /etc/auto.nfs
Now the /nfsmounts directory acts as the root for all
the NFS mounts on the client if the auto.nfs file is
filled appropriately. The name auto.nfs is chosen for
the sake of convenience—you can choose any name. In
auto.nfs add entries for all the NFS mounts as
follows:
localdata -fstype=nfs server1:/data
nfs4mount -fstype=nfs4 server2:/
Activate the settings with systemctl start autofs as
root. In this example, /nfsmounts/localdata,
the /data directory of
server1, is mounted with NFS and
/nfsmounts/nfs4mount from
server2 is mounted with NFSv4.
If the /etc/auto.master file is edited while the
service autofs is running, the automounter must be restarted for the
changes to take effect with systemctl restart autofs.
/etc/fstab #
A typical NFSv3 mount entry in /etc/fstab looks like
this:
nfs.example.com:/data /local/path nfs rw,noauto 0 0
For NFSv4 mounts, use nfs4 instead of
nfs in the third column:
nfs.example.com:/data /local/pathv4 nfs4 rw,noauto 0 0
The noauto option prevents the file system from being
mounted automatically at start-up. If you want to mount the respective
file system manually, you can shorten the mount command by
specifying the mount point only:
tux > sudo mount /local/path
If you do not enter the noauto option, the init
scripts of the system will handle the mount of those file systems at
start-up.
NFS is one of the oldest protocols, developed in the 1980s. As such, NFS is usually sufficient if you want to share small files. However, when you want to transfer big files or large numbers of clients want to access data, an NFS server becomes a bottleneck and has a significant impact on system performance. This is because files are quickly getting bigger, whereas the relative speed of Ethernet has not fully kept up.
When you request a file from a regular NFS server, the server looks up the file metadata, collects all the data and transfers it over the network to your client. However, the performance bottleneck becomes apparent no matter how small or big the files are:
With small files most of the time is spent collecting the metadata.
With big files most of the time is spent on transferring the data from server to client.
pNFS, or parallel NFS, overcomes this limitation as it separates the file system metadata from the location of the data. As such, pNFS requires two types of servers:
A metadata or control server that handles all the non-data traffic
One or more storage server(s) that hold(s) the data
The metadata and the storage servers form a single, logical NFS server. When a client wants to read or write, the metadata server tells the NFSv4 client which storage server to use to access the file chunks. The client can access the data directly on the server.
openSUSE Leap supports pNFS on the client side only.
Proceed as described in Procedure 22.2, “Importing NFS Directories”, but click
the check box and optionally . YaST will do all the necessary steps and will write all
the required options in the file /etc/exports.
Refer to Section 22.4.2, “Importing File Systems Manually” to start. Most of the
configuration is done by the NFSv4 server. For pNFS, the only difference
is to add the minorversion option and the metadata server
MDS_SERVER to your mount
command:
tux > sudo mount -t nfs4 -o minorversion=1 MDS_SERVER MOUNTPOINT
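For example, with a hypothetical metadata server mds.example.com exporting /export, the invocation could look as follows. The server name and both paths are placeholders, not values configured anywhere in this chapter:

```
tux > sudo mount -t nfs4 -o minorversion=1 mds.example.com:/export /mnt/pnfs
```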
To help with debugging, change the value in the /proc
file system:
tux > sudo echo 32767 > /proc/sys/sunrpc/nfsd_debug
tux > sudo echo 32767 > /proc/sys/sunrpc/nfs_debug
In addition to the man pages of exports,
nfs, and mount, information about
configuring an NFS server and client is available in
/usr/share/doc/packages/nfsidmap/README. For further
documentation online refer to the following Web sites:
Find the detailed technical documentation online at SourceForge.
For instructions for setting up kerberized NFS, refer to NFS Version 4 Open Source Reference Implementation.
If you have questions on NFSv4, refer to the Linux NFSv4 FAQ.
autofs is a program that automatically mounts
specified directories on an on-demand basis. It is based on a kernel module
for high efficiency, and can manage both local directories and network
shares. These automatic mount points are mounted only when they are
accessed, and unmounted after a certain period of inactivity. This
on-demand behavior saves bandwidth and results in better performance than
static mounts managed by /etc/fstab. While
autofs is a control script,
automount is the command (daemon) that does the actual
auto-mounting.
autofs is not installed on openSUSE Leap by
default. To use its auto-mounting capabilities, first install it with
tux > sudo zypper install autofs
You need to configure autofs manually by editing
its configuration files with a text editor, such as vim.
There are two basic steps to configure
autofs—the master map
file, and specific map files.
The default master configuration file for autofs
is /etc/auto.master. You can change its location by
changing the value of the DEFAULT_MASTER_MAP_NAME option
in /etc/sysconfig/autofs. Here is the content of the
default one for openSUSE Leap:
#
# Sample auto.master file
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# For details of the format look at autofs(5).
#
#/misc   /etc/auto.misc
#/net    -hosts
#
# Include /etc/auto.master.d/*.autofs
#
#+dir:/etc/auto.master.d
#
# Include central master map if it can be found using
# nsswitch sources.
#
# Note that if there are entries for /net or /misc (as
# above) in the included master map any keys that are the
# same will not be seen as the first read key seen takes
# precedence.
#
+auto.master
Although commented out (#) by default, the /misc line is an example of the simple automounter mapping syntax. In case you need to split the master map into several files, uncomment the +dir:/etc/auto.master.d line, and put the mappings (suffixed with .autofs) into the /etc/auto.master.d/ directory.
Entries in auto.master have three fields with the
following syntax:
mount point map name options
The base location where to mount the autofs
file system, such as /home.
The name of a map source to use for mounting. For the syntax of the maps files, see Section 23.2.2, “Map Files”.
These options (if specified) will apply as defaults to all entries in the given map.
For more detailed information on the specific values of the optional
map-type, format, and
options, see the manual
page (man 5 auto.master).
The following entry in auto.master tells
autofs to look in
/etc/auto.smb, and create mount points in the
/smb directory.
/smb /etc/auto.smb
Direct mounts create a mount point at the path specified inside the
relevant map file. Instead of specifying the mount point in
auto.master, replace the mount point field with
/-. For example, the following line tells
autofs to create a mount point at the place
specified in auto.smb:
/- /etc/auto.smb
If the map file is not specified with its full local or network path, it is located using the Name Service Switch (NSS) configuration:
/- auto.smb
Although files are the most common types of maps for
auto-mounting with autofs, there are other types
as well. A map specification can be the output of a command, or a result
of a query in LDAP or database. For more detailed information on map
types, see the manual page man 5 auto.master.
Map files specify the (local or network) source location, and the mount point where to mount the source locally. The general format of maps is similar to the master map. The difference is that the options appear between the mount point and the location instead of at the end of the entry:
mount point options location
Specifies where to mount the source location. This can be either a
single directory name (so-called indirect mount) to
be added to the base mount point specified in
auto.master, or the full path of the mount point
(direct mount, see Section 23.2.1.1, “Direct Mounts”).
Specifies optional comma-separated list of mount options for the
relevant entries. If auto.master contains options
for this map file as well, these are appended.
Specifies from where the file system is to be mounted. It is usually an
NFS or SMB volume in the usual notation
host_name:path_name. If the file system to be mounted
begins with a '/' (such as local /dev entries or
smbfs shares), a colon symbol ':' needs to be prefixed, such as
:/dev/sda1.
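The three-field layout described above can be illustrated with a short shell sketch that splits sample map lines into their fields. The entries themselves are made up for the example and do not refer to a real server:

```shell
# Split sample autofs map entries into the three fields described above:
# key (mount point), options, location. The entries are illustrative only.
printf '%s\n' \
  'export -fstype=nfs,ro jupiter.com:/home/geeko/doc/export' \
  'sda1 -fstype=ext4 :/dev/sda1' |
awk '{ printf "key=%s options=%s location=%s\n", $1, $2, $3 }'
```

Note how the second entry's location starts with a colon, marking a local device path as described above.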
This section introduces information on how to control the
autofs service operation, and how to view more
debugging information when tuning the automounter operation.
autofs Service #
The operation of the autofs service is controlled
by systemd. The general syntax of the systemctl
command for autofs is
tux > sudo systemctl SUB_COMMAND autofs
where SUB_COMMAND is one of:
enable: Starts the automounter daemon at boot.
start: Starts the automounter daemon.
stop: Stops the automounter daemon. Automatic mount points are not accessible.
status: Prints the current status of the autofs service
together with a part of a relevant log file.
restart: Stops and starts the automounter, terminating all running daemons and starting new ones.
reload: Checks the current auto.master map, restarts those
daemons whose entries have changed, and starts new ones for new entries.
If you experience problems when mounting directories with
autofs, it is useful to run the
automount daemon manually and watch its output messages:
Stop autofs.
tux > sudo systemctl stop autofs
From one terminal, run automount manually in the
foreground, producing verbose output.
tux > sudo automount -f -v
From another terminal, try to mount the auto-mounting file systems by
accessing the mount points (for example by cd or
ls).
Check the output of automount from the first terminal
for more information why the mount failed, or why it was not even
attempted.
The following procedure illustrates how to configure
autofs to auto-mount an NFS share available on your
network. It makes use of the information mentioned above, and assumes you
are familiar with NFS exports. For more information on NFS, see
Chapter 22, Sharing File Systems with NFS.
Edit the master map file /etc/auto.master:
tux > sudo vim /etc/auto.master
Add a new entry for the new NFS mount at the end of
/etc/auto.master:
/nfs /etc/auto.nfs --timeout=10
It tells autofs that the base mount point is
/nfs, the NFS shares are specified in the
/etc/auto.nfs map, and that all shares in this map
will be automatically unmounted after 10 seconds of inactivity.
Create a new map file for NFS shares:
tux > sudo vim /etc/auto.nfs
/etc/auto.nfs normally contains a separate line for
each NFS share. Its format is described in
Section 23.2.2, “Map Files”. Add the line describing the mount point
and the NFS share network address:
export jupiter.com:/home/geeko/doc/export
The above line means that the /home/geeko/doc/export
directory on the jupiter.com host will be auto-mounted
to the /nfs/export directory on the local host
(/nfs is taken from the
auto.master map) when requested. The
/nfs/export directory will be created automatically
by autofs.
Optionally comment out the related line in /etc/fstab
if you previously mounted the same NFS share statically. The line should
look similar to this:
#jupiter.com:/home/geeko/doc/export /nfs/export nfs defaults 0 0
Reload autofs and check if it works:
tux > sudo systemctl restart autofs
# ls -l /nfs/export
total 20
drwxr-xr-x  6 1001 users 4096 Oct 25 08:56 ./
drwxr-xr-x  3 root root     0 Apr  1 09:47 ../
drwxr-xr-x  5 1001 users 4096 Jan 14  2013 .images/
drwxr-xr-x 10 1001 users 4096 Aug 16  2013 .profiled/
drwxr-xr-x  3 1001 users 4096 Aug 30  2013 .tmp/
drwxr-xr-x  4 1001 users 4096 Oct 25 08:56 SLE-12-manual/
If you can see the list of files on the remote share, then
autofs is functioning.
This section describes topics that are beyond the basic introduction to
autofs—auto-mounting of NFS shares that are
available on your network, using wild cards in map files, and information
specific to the CIFS file system.
/net Mount Point #
This helper mount point is useful if you use a lot of NFS shares.
/net auto-mounts all NFS shares on your local network
on demand. The entry is already present in the
auto.master file, so all you need to do is uncomment
it and restart autofs:
/net -hosts
tux > sudo systemctl restart autofs
For example, if you have a server named jupiter with an
NFS share called /export, you can mount it by typing
tux > sudo cd /net/jupiter/export
on the command line.
If you have a directory with subdirectories that you need to auto-mount
individually—the typical case is the /home
directory with individual users' home directories inside—
autofs offers a clever solution for that.
In case of home directories, add the following line in
auto.master:
/home /etc/auto.home
Now you need to add the correct mapping to the
/etc/auto.home file, so that the users' home
directories are mounted automatically. One solution is to create separate
entries for each directory:
wilber   jupiter.com:/home/wilber
penguin  jupiter.com:/home/penguin
tux      jupiter.com:/home/tux
[...]
This is very awkward as you need to manage the list of users inside
auto.home. You can use the asterisk '*' instead of the
mount point, and the ampersand '&' instead of the directory to be
mounted:
* jupiter:/home/&
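The effect of this wild-card entry can be sketched as follows; the host and user names are taken from the example above:

```
# /etc/auto.home entry:
*   jupiter:/home/&
# Accessing /home/wilber then mounts jupiter:/home/wilber,
# accessing /home/tux mounts jupiter:/home/tux, and so on:
# '*' matches the requested directory name, and '&' is
# replaced with that same name in the location field.
```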
If you want to auto-mount an SMB/CIFS share (see
Chapter 21, Samba for more information on the SMB/CIFS protocol),
you need to modify the syntax of the map file. Add
-fstype=cifs in the option field, and prefix the share
location with a colon ':'.
mount point -fstype=cifs ://jupiter.com/export
According to the survey from http://www.netcraft.com/, the Apache HTTP Server (Apache) is the world's most widely-used Web server. Developed by the Apache Software Foundation (http://www.apache.org/), it is available for most operating systems. openSUSE® Leap includes Apache version 2.4. In this chapter, learn how to install, configure and set up a Web server; how to use SSL, CGI, and additional modules; and how to troubleshoot Apache.
With this section, quickly set up and start Apache. You must be root
to install and configure Apache.
Make sure the following requirements are met before trying to set up the Apache Web server:
The machine's network is configured properly. For more information about this topic, refer to Chapter 13, Basic Networking.
The machine's exact system time is maintained by synchronizing with a time server. This is necessary because parts of the HTTP protocol depend on the correct time. See Chapter 18, Time Synchronization with NTP to learn more about this topic.
The latest security updates are installed. If in doubt, run a YaST Online Update.
The default Web server port (80) is opened in the
firewall. For this, configure firewalld to allow the service
http in the public zone.
See Section 15.4.1, “Configuring the Firewall on the Command Line” for details.
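On the command line, allowing the http service in the public zone as described can be sketched with firewall-cmd, assuming firewalld is running and the public zone is the one in use:

```
tux > sudo firewall-cmd --permanent --zone=public --add-service=http
tux > sudo firewall-cmd --reload
```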
Apache on openSUSE Leap is not installed by default. To install it with a standard, predefined configuration that runs “out of the box”, proceed as follows:
Start YaST and select › .
Choose › and select .
Confirm the installation of the dependent packages to finish the installation process.
You can start Apache automatically at boot time or start it manually.
To make sure that Apache is automatically started during boot in the
targets multi-user.target and
graphical.target, execute the following command:
tux > sudo systemctl enable apache2
For more information about the systemd targets in openSUSE Leap and a description of the YaST , refer to Section 10.4, “Managing Services with YaST”.
To manually start Apache using the shell, run systemctl start
apache2.
If you do not receive error messages when starting Apache, this usually indicates that the Web server is running. To test this:
Start a browser and open http://localhost/.
If Apache is up and running, you get a test page stating “It works!”.
If you do not see this page, refer to Section 24.9, “Troubleshooting”.
Now that the Web server is running, you can add your own documents, adjust the configuration according to your needs, or add functionality by installing modules.
openSUSE Leap offers two configuration options:
Manual configuration offers a higher level of detail, but lacks the convenience of the YaST GUI.
Most configuration changes require a reload (some also a restart) of Apache
to take effect. Manually reload Apache with systemctl reload
apache2 or use one of the restart options as described in
Section 24.3, “Starting and Stopping Apache”.
If you configure Apache with YaST, this can be taken care of automatically if you set to as described in Section 24.2.3.2, “HTTP Server Configuration”.
This section gives an overview of the Apache configuration files. If you use YaST for configuration, you do not need to touch these files—however, the information might be useful for you if you want to switch to manual configuration later on.
Apache configuration files can be found in two different locations:
/etc/sysconfig/apache2 #
/etc/sysconfig/apache2 controls some global settings
of Apache, like modules to load, additional configuration files to
include, flags with which the server should be started, and flags that
should be added to the command line. Every configuration option in this
file is extensively documented and therefore not mentioned here. For a
general-purpose Web server, the settings in
/etc/sysconfig/apache2 should be sufficient for any
configuration needs.
/etc/apache2/ #
/etc/apache2/ hosts all configuration files for
Apache. In the following, the purpose of each file is explained. Each file
includes several configuration options (also called
directives). Every configuration option in these
files is extensively documented and therefore not mentioned here.
The Apache configuration files are organized as follows:
/etc/apache2/
|
|- charset.conv
|- conf.d/
| |
| |- *.conf
|
|- default-server.conf
|- errors.conf
|- httpd.conf
|- listen.conf
|- magic
|- mime.types
|- mod_*.conf
|- server-tuning.conf
|- ssl.*
|- ssl-global.conf
|- sysconfig.d
| |
| |- global.conf
| |- include.conf
| |- loadmodule.conf
|
|- uid.conf
|- vhosts.d
| |- *.conf
charset.conv
Specifies which character sets to use for different languages. Do not edit this file.
conf.d/*.conf
Configuration files added by other modules. These configuration files
can be included into your virtual host configuration where needed. See
vhosts.d/vhost.template for examples. By doing so,
you can provide different module sets for different virtual hosts.
default-server.conf
Global configuration for all virtual hosts with reasonable defaults. Instead of changing the values, overwrite them with a virtual host configuration.
errors.conf
Defines how Apache responds to errors. To customize these messages for all virtual hosts, edit this file. Otherwise overwrite these directives in your virtual host configurations.
httpd.conf
The main Apache server configuration file. Avoid changing this file. It primarily contains include statements and global settings. Overwrite global settings in the pertinent configuration files listed here. Change host-specific settings (such as document root) in your virtual host configuration.
listen.conf
Binds Apache to specific IP addresses and ports. Name-based virtual hosting is also configured here. For details, see Section 24.2.2.1.1, “Name-Based Virtual Hosts”.
magic
Data for the mime_magic module that helps Apache automatically determine the MIME type of an unknown file. Do not change this file.
mime.types
MIME types known by the system (this actually is a link to
/etc/mime.types). Do not edit this file. If you
need to add MIME types not listed here, add them to
mod_mime-defaults.conf.
mod_*.conf
Configuration files for the modules that are installed by default.
Refer to Section 24.4, “Installing, Activating, and Configuring Modules” for details. Note that
configuration files for optional modules reside in the directory
conf.d.
server-tuning.conf
Contains configuration directives for the different MPMs (see Section 24.4.4, “Multiprocessing Modules”) and general configuration options that control Apache's performance. Properly test your Web server when making changes here.
ssl-global.conf and ssl.*
Global SSL configuration and SSL certificate data. Refer to Section 24.6, “Setting Up a Secure Web Server with SSL” for details.
sysconfig.d/*.conf
Configuration files automatically generated from
/etc/sysconfig/apache2. Do not change any of these
files—edit /etc/sysconfig/apache2 instead.
Do not put other configuration files in this directory.
uid.conf
Specifies under which user and group ID Apache runs. Do not change this file.
vhosts.d/*.conf
Your virtual host configuration should be located here. The directory
contains template files for virtual hosts with and without SSL. Every
file in this directory ending with .conf is
automatically included in the Apache configuration. Refer to
Section 24.2.2.1, “Virtual Host Configuration” for details.
Configuring Apache manually involves editing plain text configuration files
as user root.
The term virtual host refers to Apache's ability to serve multiple universal resource identifiers (URIs) from the same physical machine. This means that several domains, such as www.example.com and www.example.net, are run by a single Web server on one physical machine.
It is common practice to use virtual hosts to save administrative effort (only a single Web server needs to be maintained) and hardware expenses (each domain does not require a dedicated server). Virtual hosts can be name based, IP based, or port based.
To list all existing virtual hosts, use the command
apache2ctl -S. This outputs a list
showing the default server and all virtual hosts together with their IP
addresses and listening ports. Furthermore, the list also contains an
entry for each virtual host showing its location in the configuration
files.
Virtual hosts can be configured via YaST as described in
Section 24.2.3.1.4, “Virtual Hosts” or
by manually editing a configuration file. By default, Apache in
openSUSE Leap is prepared for one configuration file per virtual host in
/etc/apache2/vhosts.d/. All files in this directory
with the extension .conf are automatically included
to the configuration. A basic template for a virtual host is provided in
this directory (vhost.template or
vhost-ssl.template for a virtual host with SSL
support).
It is recommended to always create a virtual host configuration file, even if your Web server only hosts one domain. By doing so, you not only have the domain-specific configuration in one file, but you can always fall back to a working basic configuration by simply moving, deleting, or renaming the configuration file for the virtual host. For the same reason, you should also create separate configuration files for each virtual host.
When using name-based virtual hosts it is recommended to set up a default
configuration that will be used when a domain name does not match a
virtual host configuration. The default virtual host is the one whose
configuration is loaded first. Since the order of the configuration files
is determined by file name, start the file name of the default virtual
host configuration with an underscore character (_) to
make sure it is loaded first (for example:
_default_vhost.conf).
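The effect of this naming scheme can be checked with standard shell tools (a sketch; the file names are examples):

```shell
# Apache includes the vhost files in lexicographic order. In the C locale,
# the underscore sorts before lowercase letters, so _default_vhost.conf
# is read first and becomes the default virtual host.
printf '%s\n' example.com.conf other.example.net.conf _default_vhost.conf \
  | LC_ALL=C sort
```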
The
<VirtualHost></VirtualHost>
block holds the information that applies to a particular domain. When
Apache receives a client request for a defined virtual host, it uses the
directives enclosed in this section. Almost all directives can be used in
a virtual host context. See
http://httpd.apache.org/docs/2.4/mod/quickreference.html
for further information about Apache's configuration directives.
With name-based virtual hosts, more than one Web site is served per IP
address. Apache uses the host field in the HTTP header that is sent by
the client to connect the request to a matching
ServerName entry of one of the virtual host
declarations. If no matching ServerName is
found, the first specified virtual host is used as a default.
The first step is to create a <VirtualHost>
block for each different name-based host that you want to serve. Inside
each <VirtualHost> block, you will need at
minimum a ServerName directive to designate which host
is served and a DocumentRoot directive to show where
in the file system the content for that host resides.
VirtualHost Entries

<VirtualHost *:80>
   # This first-listed virtual host is also the default for *:80
   ServerName www.example.com
   ServerAlias example.com
   DocumentRoot /srv/www/htdocs/domain
</VirtualHost>

<VirtualHost *:80>
   ServerName other.example.com
   DocumentRoot /srv/www/htdocs/otherdomain
</VirtualHost>
The opening VirtualHost tag takes the IP address
(or fully qualified domain name) as an argument in a name-based virtual
host configuration. A port number directive is optional.
The wild card * is also allowed as a substitute for the IP address. When using IPv6 addresses, the address must be included in square brackets.
VirtualHost Directives

<VirtualHost 192.168.3.100:80>
  ...
</VirtualHost>

<VirtualHost 192.168.3.100>
  ...
</VirtualHost>

<VirtualHost *:80>
  ...
</VirtualHost>

<VirtualHost *>
  ...
</VirtualHost>

<VirtualHost [2002:c0a8:364::]>
  ...
</VirtualHost>
This alternative virtual host configuration requires the setup of multiple IPs for a machine. One instance of Apache hosts several domains, each of which is assigned a different IP.
The physical server must have one IP address for each IP-based virtual host. If the machine does not have multiple network cards, virtual network interfaces (IP aliasing) can also be used.
The following example shows Apache running on a machine with the IP
192.168.3.100, hosting two domains
on the additional IPs 192.168.3.101
and 192.168.3.102. A separate
VirtualHost block is needed for every virtual
server.
VirtualHost Directives

<VirtualHost 192.168.3.101>
  ...
</VirtualHost>

<VirtualHost 192.168.3.102>
  ...
</VirtualHost>
Here, VirtualHost directives are only specified
for interfaces other than 192.168.3.100. When a
Listen directive is also configured for
192.168.3.100, a separate IP-based virtual host must
be created to answer HTTP requests to that interface—otherwise the
directives found in the default server configuration
(/etc/apache2/default-server.conf) are applied.
At least the following directives should be in each virtual host
configuration to set up a virtual host. See
/etc/apache2/vhosts.d/vhost.template for more
options.
ServerName
The fully qualified domain name under which the host should be addressed.
DocumentRoot
Path to the directory from which Apache should serve files for this
host. For security reasons, access to the entire file system is
forbidden by default, so you must explicitly unlock this directory
within a Directory container.
ServerAdmin
E-mail address of the server administrator. This address is, for example, shown on error pages Apache creates.
ErrorLog
The error log file for this virtual host. Although it is not necessary
to create separate error log files for each virtual host, it is common
practice to do so, because it makes the debugging of errors much
easier. /var/log/apache2/ is the default
directory for Apache's log files.
CustomLog
The access log file for this virtual host. Although it is not
necessary to create separate access log files for each virtual host,
it is common practice to do so, because it allows the separate
analysis of access statistics for each host.
/var/log/apache2/ is the default directory for
Apache's log files.
As mentioned above, access to the whole file system is forbidden by
default for security reasons. Therefore, explicitly unlock the
directories in which you have placed the files Apache should
serve—for example the DocumentRoot:
<Directory "/srv/www/www.example.com/htdocs">
  Require all granted
</Directory>
Require all granted
In previous versions of Apache, the statement Require all
granted was expressed as:
Order allow,deny
Allow from all
This old syntax is still supported by the
mod_access_compat module.
The complete configuration file looks like this:
VirtualHost Configuration

<VirtualHost 192.168.3.100>
  ServerName www.example.com
  DocumentRoot /srv/www/www.example.com/htdocs
  ServerAdmin webmaster@example.com
  ErrorLog /var/log/apache2/www.example.com_log
  CustomLog /var/log/apache2/www.example.com-access_log common
  <Directory "/srv/www/www.example.com/htdocs">
    Require all granted
  </Directory>
</VirtualHost>
To configure your Web server with YaST, start YaST and select › . When starting the module for the first time, the HTTP Server Wizard starts, prompting you to make a few basic decisions concerning the administration of the server. After you have finished the wizard, the HTTP Server Configuration dialog starts each time you call the module. For more information, see Section 24.2.3.2, “HTTP Server Configuration”.
The HTTP Server Wizard consists of five steps. In the last step of the dialog, you may enter the expert configuration mode to make even more specific settings.
Here, specify the network interfaces and ports Apache uses to listen for
incoming requests. You can select any combination of existing network
interfaces and their respective IP addresses. Ports from all three ranges
(well-known ports, registered ports, and dynamic or private ports) that are
not reserved by other services can be used. The default setting is to
listen on all network interfaces (IP addresses) on port
80.
Check to open the ports in the firewall that the Web server listens on. This is necessary to make the Web server available on the network, which can be a LAN, WAN, or the public Internet. Keeping the port closed is only useful in test situations where no external access to the Web server is necessary. If you have multiple network interfaces, click to specify on which interface(s) the port(s) should be opened.
Click to continue with the configuration.
The configuration option allows for the activation or deactivation of the script languages that the Web server should support. For the activation or deactivation of other modules, refer to Section 24.2.3.2.2, “Server Modules”. Click to advance to the next dialog.
This option pertains to the default Web server. As explained in Section 24.2.2.1, “Virtual Host Configuration”, Apache can serve multiple virtual hosts from a single physical machine. The first declared virtual host in the configuration file is commonly called the default host. Each virtual host inherits the default host's configuration.
To edit the host settings (also called directives), select the appropriate entry in the table then click . To add new directives, click . To delete a directive, select it and click .
Here is a list of the default settings of the server:
Document Root
Path to the directory from which Apache serves files for this host.
/srv/www/htdocs is the default location.
Alias
Using Alias directives, URLs can be
mapped to physical file system locations. This means that a certain path
even outside the Document Root in the file system can
be accessed via a URL aliasing that path.
The default openSUSE Leap Alias
/icons points to
/usr/share/apache2/icons for the Apache icons
displayed in the directory index view.
ScriptAlias
Similar to the Alias directive, the
ScriptAlias directive maps a URL to a file
system location. The difference is that
ScriptAlias designates the target directory as
a CGI location, meaning that CGI scripts should be executed in that
location.
Directory
With Directory settings, you can enclose a
group of configuration options that will only apply to the specified
directory.
Access and display options for the directories
/srv/www/htdocs,
/usr/share/apache2/icons and
/srv/www/cgi-bin are configured here. It should not
be necessary to change the defaults.
Include
With Include, additional configuration files can be specified. Two
Include directives are already preconfigured:
/etc/apache2/conf.d/ is the directory containing
the configuration files that come with external modules. With this
directive, all files in this directory ending in
.conf are included. With the second directive,
/etc/apache2/conf.d/apache2-manual.conf, the
apache2-manual configuration file is included.
Server Name
This specifies the default URL used by clients to contact the Web
server. Use a fully qualified domain name (FQDN) to reach the Web server
at http://FQDN/ or its IP
address. You cannot choose an arbitrary name here—the server must
be “known” under this name.
Server Administrator E-Mail
E-mail address of the server administrator. This address is, for example, shown on error pages Apache creates.
After finishing with the step, click to continue with the configuration.
In this step, the wizard displays a list of already configured virtual hosts (see Section 24.2.2.1, “Virtual Host Configuration”). If you have not made manual changes prior to starting the YaST HTTP wizard, no virtual host is present.
To add a host, click to open a dialog in which to
enter basic information about the host, such as ,
(DocumentRoot), and the . is used to determine
how a host is identified (name based or IP based). Specify the name or IP
address with
Clicking advances to the second part of the virtual host configuration dialog.
In part two of the virtual host configuration you can specify whether to
enable CGI scripts and which directory to use for these scripts. It is also
possible to enable SSL. If you do so, you must specify the path to the
certificate as well. See Section 24.6.2, “Configuring Apache with SSL”
for details on SSL and certificates. With the option, you can specify which file to display when the
client requests a directory (by default, index.html).
Add one or more file names (space-separated) to change this.
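In the generated configuration, this setting corresponds to the DirectoryIndex directive; a sketch with example file names:

```apache
# The first file from this list found in the requested directory is served
DirectoryIndex index.html index.php
```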
With , the content of the users' public
directories
(~USER/public_html/) is
made available on the server under
http://www.example.com/~USER.
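The underlying configuration is a sketch like the following (mod_userdir must be enabled):

```apache
<IfModule mod_userdir.c>
    # Serve ~USER/public_html as http://www.example.com/~USER
    UserDir public_html
</IfModule>
```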
It is not possible to add virtual hosts at will. If using name-based virtual hosts, each host name must be resolved on the network. If using IP-based virtual hosts, you can assign only one host to each IP address available.
This is the final step of the wizard. Here, determine how and when the Apache server is started: when booting or manually. Also see a short summary of the configuration made so far. If you are satisfied with your settings, click to complete configuration. To change something, click until you have reached the desired dialog. Clicking opens the dialog described in Section 24.2.3.2, “HTTP Server Configuration”.
The dialog also lets you make even more adjustments to the configuration than the wizard (which only runs if you configure your Web server for the first time). It consists of four tabs described in the following. No configuration option you change here is effective immediately—you always must confirm your changes with to make them effective. Clicking leaves the configuration module and discards your changes.
In , select whether Apache should be running
() or stopped (). In
, ,
, or addresses and ports
on which the server should be available. The default is to listen on all
interfaces on port 80. You should always check
, because otherwise the Web server
is not reachable from outside. Keeping the port closed is only useful in
test situations where no external access to the Web server is necessary. If
you have multiple network interfaces, click to specify on which interface(s) the port(s) should be
opened.
With , watch either the access log file or the error log file. This is useful if you want to test your configuration. The log file opens in a separate window from which you can also restart or reload the Web server. For details, see Section 24.3, “Starting and Stopping Apache”. These commands are effective immediately and their log messages are also displayed immediately.
You can change the status (enabled or disabled) of Apache2 modules by clicking . Click to add a new module that is already installed but not yet listed. Learn more about modules in Section 24.4, “Installing, Activating, and Configuring Modules”.
These dialogs are identical to the ones already described. Refer to Section 24.2.3.1.3, “Default Host” and Section 24.2.3.1.4, “Virtual Hosts”.
If configured with YaST as described in
Section 24.2.3, “Configuring Apache with YaST”, Apache is started at boot
time in the multi-user.target and
graphical.target. You can change this behavior
using YaST's or with the
systemctl command line tool (systemctl
enable or systemctl disable).
To start, stop, or manipulate Apache on a running system, use either the
systemctl or the apachectl commands as
described below.
For general information about systemctl commands, refer
to Section 10.2.1, “Managing Services in a Running System”.
systemctl status apache2
Checks if Apache is started.
systemctl start apache2
Starts Apache if it is not already running.
systemctl stop apache2
Stops Apache by terminating the parent process.
systemctl restart apache2
Stops and then restarts Apache. Starts the Web server if it was not running before.
systemctl try-restart apache2
Stops then restarts Apache only if it is already running.
systemctl reload apache2
Stops the Web server by advising all forked Apache processes to first finish their requests before shutting down. As each process dies, it is replaced by a newly started one, resulting in a complete “restart” of Apache.
This command allows activating changes in the Apache configuration without causing connection break-offs.
systemctl stop apache2
Stops the Web server after a defined period of time configured with
GracefulShutdownTimeout to ensure that existing
requests can be finished.
apachectl configtest
Checks the syntax of the configuration files without affecting a running Web server. Because this check is forced every time the server is started, reloaded, or restarted, it is usually not necessary to run the test explicitly (if a configuration error is found, the Web server is not started, reloaded, or restarted).
apachectl status and
apachectl fullstatus
Dumps a short or full status screen, respectively. Requires the module
mod_status to be enabled and a text-based
browser (such as links or
w3m) installed.
In addition to that,
status must be added to
APACHE_SERVER_FLAGS in the file
/etc/sysconfig/apache2.
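A sketch of that edit, performed on a temporary copy rather than the real /etc/sysconfig/apache2:

```shell
# Append "status" to the APACHE_SERVER_FLAGS variable. A temporary file
# stands in for /etc/sysconfig/apache2; on a real system, edit that file.
f=$(mktemp)
echo 'APACHE_SERVER_FLAGS="SSL"' > "$f"
sed -i 's/^APACHE_SERVER_FLAGS="\([^"]*\)"/APACHE_SERVER_FLAGS="\1 status"/' "$f"
cat "$f"   # APACHE_SERVER_FLAGS="SSL status"
rm "$f"
```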
If you specify additional flags to the commands, these are passed through to the Web server.
The Apache software is built in a modular fashion: all functionality except
for some core tasks is handled by modules. This has progressed so far that even
HTTP is processed by a module (http_core).
Apache modules can be compiled into the Apache binary at build time or be dynamically loaded at runtime. Refer to Section 24.4.2, “Activation and Deactivation” for details of how to load modules dynamically.
Apache modules can be divided into four different categories:
Base modules are compiled into Apache by default. Apache in openSUSE Leap
has only mod_so (needed to load other modules)
and http_core compiled in. All others are
available as shared objects: rather than being included in the server
binary itself, they can be included at runtime.
In general, modules labeled as extensions are included in the Apache software package, but are usually not compiled into the server statically. In openSUSE Leap, they are available as shared objects that can be loaded into Apache at runtime.
Modules labeled external are not included in the official Apache distribution. However, openSUSE Leap provides several of them.
MPMs are responsible for accepting and handling requests to the Web server, representing the core of the Web server software.
If you have done a default installation as described in
Section 24.1.2, “Installation”, the following
modules are already installed: all base and extension modules, the
multiprocessing module Prefork MPM, and the external module
mod_python.
You can install additional external modules by starting YaST and choosing
› . Now choose
›
and search for apache. Among other packages, the
results list contains all available external Apache modules.
Activate or deactivate particular modules either manually or with YaST. In YaST, script language modules (PHP 5, Perl, and Python) need to be enabled or disabled with the module configuration described in Section 24.2.3.1, “HTTP Server Wizard”. All other modules can be enabled or disabled as described in Section 24.2.3.2.2, “Server Modules”.
If you prefer to activate or deactivate the modules manually, use the
commands a2enmod MODULE or
a2dismod MODULE,
respectively. a2enmod -l outputs a list of all currently
active modules.
If you have activated external modules manually, make sure to load their
configuration files in all virtual host configurations. Configuration
files for external modules are located under
/etc/apache2/conf.d/ and are loaded in
/etc/apache2/default-server.conf by default. For more
fine-grained control you can comment out the inclusion in
/etc/apache2/default-server.conf and add it to
specific virtual hosts only. See
/etc/apache2/vhosts.d/vhost.template for examples.
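A sketch of such a host-specific inclusion (the host name and module configuration file are examples):

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /srv/www/htdocs/domain
    # Load an external module's configuration for this host only
    Include /etc/apache2/conf.d/mod_security2.conf
</VirtualHost>
```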
All base and extension modules are described in detail in the Apache documentation. Only a brief description of the most important modules is available here. Refer to http://httpd.apache.org/docs/2.4/mod/ to learn details about each module.
mod_actions
Provides methods to execute a script whenever a certain MIME type (such
as application/pdf), a file with a specific
extension (like .rpm), or a certain request method
(such as GET) is requested. This module is
enabled by default.
mod_alias
Provides Alias and
Redirect directives with which you can map a
URL to a specific directory (Alias) or redirect
a requested URL to another location. This module is enabled by default.
mod_auth*
The authentication modules provide different authentication methods:
basic authentication with mod_auth_basic or
digest authentication with mod_auth_digest.
mod_auth_basic and
mod_auth_digest must be combined with an
authentication provider module, mod_authn_*
(for example, mod_authn_file for text
file–based authentication) and with an authorization module
mod_authz_* (for example,
mod_authz_user for user authorization).
More information about this topic is available in the Authentication HOWTO at http://httpd.apache.org/docs/2.4/howto/auth.html.
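A minimal sketch combining these modules for basic authentication (the paths and realm name are examples; the password file would be created with the htpasswd utility):

```apache
<Directory "/srv/www/htdocs/protected">
    AuthType Basic
    AuthName "Restricted Area"
    # mod_authn_file provides the "file" authentication provider
    AuthBasicProvider file
    AuthUserFile /etc/apache2/htpasswd
    # mod_authz_user evaluates this authorization requirement
    Require valid-user
</Directory>
```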
mod_autoindex
Autoindex generates directory listings when no index file (for example,
index.html) is present. The look and feel of these
indexes is configurable. This module is enabled by default. However,
directory listings are disabled by default via the
Options directive—overwrite this setting
in your virtual host configuration. The default configuration file for
this module is located at
/etc/apache2/mod_autoindex-defaults.conf.
mod_cgi
mod_cgi is needed to execute CGI scripts. This
module is enabled by default.
mod_deflate
Using this module, Apache can be configured to compress given file types on the fly before delivering them.
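A configuration sketch (the MIME types listed are examples):

```apache
<IfModule mod_deflate.c>
    # Compress text-based content on the fly before delivery
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
</IfModule>
```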
mod_dir
mod_dir provides the
DirectoryIndex directive with which you can
configure which files are automatically delivered when a directory is
requested (index.html by default). It also provides
an automatic redirect to the correct URL when a directory request does
not contain a trailing slash. This module is enabled by default.
mod_env
Controls the environment that is passed to CGI scripts or SSI pages. Environment variables can be set or unset or passed from the shell that invoked the httpd process. This module is enabled by default.
mod_expires
With mod_expires, you can control how often
proxy and browser caches refresh your documents by sending an
Expires header. This module is enabled by
default.
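A configuration sketch (the caching periods are examples):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    # Let proxies and browsers cache images for a month, HTML for a day
    ExpiresByType image/png "access plus 1 month"
    ExpiresByType text/html "access plus 1 day"
</IfModule>
```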
mod_http2
With mod_http2, Apache gains support for the
HTTP/2 protocol. It can be enabled by specifying
Protocols h2 http/1.1 in a
VirtualHost.
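A configuration sketch (assuming an SSL-enabled host on port 443):

```apache
<VirtualHost *:443>
    ServerName www.example.com
    # Offer HTTP/2 to capable clients, falling back to HTTP/1.1
    Protocols h2 http/1.1
</VirtualHost>
```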
mod_include
mod_include lets you use Server Side Includes
(SSI), which provide a basic functionality to generate HTML pages
dynamically. This module is enabled by default.
mod_info
Provides a comprehensive overview of the server configuration under
http://localhost/server-info/. For security reasons, you should always
limit access to this URL. By default, only
localhost is allowed to
access this URL. mod_info is configured at
/etc/apache2/mod_info.conf.
mod_log_config
With this module, you can configure the look of the Apache log files. This module is enabled by default.
mod_mime
The mime module ensures that a file is delivered with the correct
MIME header based on the file name's extension (for example
text/html for HTML documents). This module is
enabled by default.
mod_negotiation
Necessary for content negotiation. See http://httpd.apache.org/docs/2.4/content-negotiation.html for more information. This module is enabled by default.
mod_rewrite
Provides the functionality of mod_alias, but
offers more features and flexibility. With
mod_rewrite, you can redirect URLs based on
multiple rules, request headers, and more.
mod_setenvif
Sets environment variables based on details of the client's request, such as the browser string the client sends, or the client's IP address. This module is enabled by default.
mod_spelling
mod_spelling attempts to automatically correct
typographical errors in URLs, such as capitalization errors.
mod_ssl
Enables encrypted connections between Web server and clients. See Section 24.6, “Setting Up a Secure Web Server with SSL” for details. This module is enabled by default.
mod_status
Provides information on server activity and performance under
http://localhost/server-status/. For security reasons, you should always
limit access to this URL. By default, only
localhost is allowed to
access this URL. mod_status is configured at
/etc/apache2/mod_status.conf.
mod_suexec
mod_suexec lets you run CGI scripts under a
different user and group. This module is enabled by default.
mod_userdir
Makes user-specific directories available under
~USER/. The
UserDir directive must be specified in the
configuration. This module is enabled by default.
openSUSE Leap provides two different multiprocessing modules (MPMs) for use with Apache:
The prefork MPM implements a non-threaded, preforking Web server. It makes the Web server behave similarly to Apache version 1.x in that it isolates each request and handles it by forking a separate child process. Thus problematic requests cannot affect others, avoiding a lockup of the Web server.
While providing stability with this process-based approach, the prefork MPM consumes more system resources than its counterpart, the worker MPM. The prefork MPM is considered the default MPM for Unix-based operating systems.
This document assumes Apache is used with the prefork MPM.
The worker MPM provides a multi-threaded Web server. A thread is a “lighter” form of a process. The advantage of a thread over a process is its lower resource consumption. Instead of only forking child processes, the worker MPM serves requests by using threads with server processes. The preforked child processes are multi-threaded. This approach makes Apache perform better by consuming fewer system resources than the prefork MPM.
One major disadvantage is the stability of the worker MPM: if a thread becomes corrupt, all threads of a process can be affected. In the worst case, this may result in a server crash. Especially when using the Common Gateway Interface (CGI) with Apache under heavy load, internal server errors might occur because of threads being unable to communicate with system resources. Another argument against using the worker MPM with Apache is that not all available Apache modules are thread-safe and thus cannot be used with the worker MPM.
Not all available PHP modules are thread-safe. Using the worker MPM with
mod_php is strongly discouraged.
Find a list of all external modules shipped with openSUSE Leap here. Find the module's documentation in the listed directory.
mod_apparmor
Adds support to Apache to provide AppArmor confinement to individual CGI
scripts handled by modules like mod_php5 and
mod_perl.
Package Name: apache2-mod_apparmor
More Information: Part IV, “Confining Privileges with AppArmor”
mod_perl
mod_perl enables you to run Perl scripts in an
embedded interpreter. The persistent interpreter embedded in the server
avoids the overhead of starting an external interpreter and the penalty
of Perl start-up time.
Package Name: apache2-mod_perl
Configuration File: /etc/apache2/conf.d/mod_perl.conf
More Information: /usr/share/doc/packages/apache2-mod_perl
mod_php5
PHP is a server-side, cross-platform HTML embedded scripting language.
Package Name: apache2-mod_php5
Configuration File: /etc/apache2/conf.d/php5.conf
More Information: /usr/share/doc/packages/apache2-mod_php5
mod_python
mod_python allows embedding Python within the
Apache HTTP server for a considerable boost in performance and added
flexibility in designing Web-based applications.
Package Name: apache2-mod_python
More Information: /usr/share/doc/packages/apache2-mod_python
mod_security
mod_security provides a Web application
firewall to protect Web applications from a range of attacks. It also
enables HTTP traffic monitoring and real-time analysis.
Package Name: apache2-mod_security2
Configuration File: /etc/apache2/conf.d/mod_security2.conf
More Information: /usr/share/doc/packages/apache2-mod_security2
Documentation: http://modsecurity.org/documentation/
Apache can be extended by advanced users by writing custom modules. To
develop modules for Apache or compile third-party modules, the package
apache2-devel is required along with the
corresponding development tools. apache2-devel
also contains the apxs2 tools, which are necessary for
compiling additional modules for Apache.
apxs2 enables the compilation and installation of
modules from source code (including the required changes to the
configuration files), which creates dynamic shared
objects (DSOs) that can be loaded into Apache at runtime.
The apxs2 binaries are located under
/usr/sbin:
/usr/sbin/apxs2—suitable for building an
extension module that works with any MPM. The installation location is
/usr/lib64/apache2.
/usr/sbin/apxs2-prefork—suitable for prefork
MPM modules. The installation location is
/usr/lib64/apache2-prefork.
/usr/sbin/apxs2-worker—suitable for worker MPM
modules. The installation location is
/usr/lib64/apache2-worker.
Install and activate a module from source code with the following commands:
tux > cd /path/to/module/source
tux > sudo apxs2 -cia MODULE.c
where -c compiles the module, -i installs
it, and -a activates it. Other options of
apxs2 are described in the
apxs2(1) man page.
Apache's Common Gateway Interface (CGI) lets you create dynamic content with programs or scripts usually called CGI scripts. CGI scripts can be written in any programming language. Usually, script languages such as Perl or PHP are used.
To enable Apache to deliver content created by CGI scripts,
mod_cgi needs to be activated.
mod_alias is also needed. Both modules are enabled
by default. Refer to Section 24.4.2, “Activation and Deactivation” for
details on activating modules.
Allowing the server to execute CGI scripts is a potential security hole. Refer to Section 24.8, “Avoiding Security Problems” for additional information.
In openSUSE Leap, the execution of CGI scripts is only allowed in the
directory /srv/www/cgi-bin/. This location is already
configured to execute CGI scripts. If you have created a virtual host
configuration (see
Section 24.2.2.1, “Virtual Host Configuration”) and want to
place your scripts in a host-specific directory, you must unlock and
configure this directory.
ScriptAlias /cgi-bin/ "/srv/www/www.example.com/cgi-bin/"   (1)

<Directory "/srv/www/www.example.com/cgi-bin/">
  Options +ExecCGI                  (2)
  AddHandler cgi-script .cgi .pl    (3)
  Require all granted               (4)
</Directory>

(1) Tells Apache to handle all files within this directory as CGI scripts.
(2) Enables CGI script execution.
(3) Tells the server to treat files with the extensions .pl and .cgi as CGI scripts. Adjust according to your needs.
(4) The Require directive controls access to the CGI directory; here, access is granted to everyone.
CGI programming differs from “regular” programming in that the output of CGI
programs and scripts must be preceded by a MIME type header such as
Content-type: text/html. This header is sent to the
client, so it understands what kind of content it receives. Second, the
script's output must be something the client, usually a Web browser,
understands: usually HTML, but plain text or images, for example, are also possible.
A simple test script available under
/usr/share/doc/packages/apache2/test-cgi is part of
the Apache package. It outputs the content of some environment variables as
plain text. Copy this script to either
/srv/www/cgi-bin/ or the script directory of your
virtual host (/srv/www/www.example.com/cgi-bin/) and name it
test.cgi. Edit the file to have
#!/bin/sh as the first line.
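A minimal CGI script of this kind could look like the following sketch (unlike the shipped test-cgi, it prints a fixed text instead of environment variables):

```shell
#!/bin/sh
# Minimal CGI script: print the MIME type header, then a mandatory blank
# line separating headers from the body, then the content itself.
echo "Content-type: text/plain"
echo ""
echo "Hello from CGI"
```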
Files accessible by the Web server should be owned by the user
root. For additional information
see Section 24.8, “Avoiding Security Problems”. Because the Web server runs
with a different user, the CGI scripts must be world-executable and
world-readable. Change into the CGI directory and use the command
chmod 755 test.cgi to apply the proper permissions.
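The effect of these permission bits can be checked as follows (a sketch using a scratch file in place of test.cgi):

```shell
# Create a scratch file, apply the permissions required for CGI scripts,
# and show the resulting mode string (world-readable and world-executable).
f=$(mktemp)
chmod 755 "$f"
ls -l "$f" | cut -c1-10   # -rwxr-xr-x
rm "$f"
```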
Now call http://localhost/cgi-bin/test.cgi or
http://www.example.com/cgi-bin/test.cgi. You should see the
“CGI/1.0 test script report”.
If you do not see the output of the test program but an error message instead, check the following:
Have you reloaded the server after having changed the
configuration?
If not, reload with systemctl reload apache2.
If you have configured your custom CGI directory, is it
configured properly?
If in doubt, try the script within the default CGI directory
/srv/www/cgi-bin/ and call it with
http://localhost/cgi-bin/test.cgi.
Are the file permissions correct?
Change into the CGI directory and
execute ls -l test.cgi. The output should start with
-rwxr-xr-x 1 root root
Make sure that the script does not contain programming errors. If you
have not changed test.cgi, this should not be the
case, but if you are using your own programs, always make sure that they
do not contain programming errors.
Whenever sensitive data, such as credit card information, is transferred
between Web server and client, it is desirable to have a secure, encrypted
connection with authentication. mod_ssl provides
strong encryption using the secure sockets layer (SSL) and transport layer
security (TLS) protocols for HTTP communication between a client and the Web
server. Using SSL/TLS, a private connection between Web server and client is
established. Data integrity is ensured and client and server can
authenticate each other.
For this purpose, the server sends an SSL certificate that holds information proving the server's valid identity before any request to a URL is answered. In turn, this guarantees that the server is the uniquely correct end point for the communication. Additionally, the certificate generates an encrypted connection between client and server that can transport information without the risk of exposing sensitive, plain-text content.
mod_ssl does not implement the SSL/TLS protocols
itself, but acts as an interface between Apache and an SSL library. In
openSUSE Leap, the OpenSSL library is used. OpenSSL is automatically
installed with Apache.
The most visible effect of using mod_ssl with
Apache is that URLs are prefixed with https:// instead of
http://.
To use SSL/TLS with the Web server, you need to create an SSL certificate. This certificate is needed for the authorization between Web server and client, so that each party can clearly identify the other party. To ensure the integrity of the certificate, it must be signed by a party every user trusts.
There are three types of certificates you can create: a “dummy” certificate for testing purposes only, a self-signed certificate for a defined circle of users that trust you, and a certificate signed by an independent, publicly-known certificate authority (CA).
Creating a certificate is a two-step process: first, a private key for the certificate authority is generated, then the server certificate is signed with this key.
To learn more about concepts and definitions of SSL/TLS, refer to http://httpd.apache.org/docs/2.4/ssl/ssl_intro.html.
To generate a dummy certificate, call the script
/usr/bin/gensslcert. It creates or overwrites the files
listed below. Use gensslcert's optional switches to
fine-tune the certificate. Call /usr/bin/gensslcert
-h for more information.
/etc/apache2/ssl.crt/ca.crt
/etc/apache2/ssl.crt/server.crt
/etc/apache2/ssl.key/server.key
/etc/apache2/ssl.csr/server.csr
A copy of ca.crt is also placed at
/srv/www/htdocs/CA.crt for download.
A dummy certificate should never be used on a production system. Only use it for testing purposes.
If you are setting up a secure Web server for an intranet or for a defined circle of users, it is probably sufficient if you sign a certificate with your own certificate authority (CA). Note that visitors to such a site will see a warning like “this is an untrusted site”, as Web browsers do not recognize self-signed certificates.
Only use a self-signed certificate on a Web server that is accessed by people who know and trust you as a certificate authority. It is not recommended to use such a certificate for a public shop, for example.
First you need to generate a certificate signing request (CSR). You are
going to use openssl, with PEM as
the certificate format. During this step, you will be asked for a
passphrase and to answer several questions. Remember the passphrase you
enter, as you will need it in the future.
tux > sudo openssl req -new > new.cert.csr
Generating a 1024 bit RSA private key
..++++++
.........++++++
writing new private key to 'privkey.pem'
Enter PEM pass phrase: 1
Verifying - Enter PEM pass phrase: 2
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: 3
State or Province Name (full name) [Some-State]: 4
Locality Name (eg, city) []: 5
Organization Name (eg, company) [Internet Widgits Pty Ltd]: 6
Organizational Unit Name (eg, section) []: 7
Common Name (for example server FQDN, or YOUR name) []: 8
Email Address []: 9

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: 10
An optional company name []: 11
Fill in your passphrase.
Fill it in once more (and remember it).
Fill in your two-letter country code.
Fill in the name of the state where you live.
Fill in the city name.
Fill in the name of the organization you work for.
Fill in your organizational unit, or leave it blank if you have none.
Fill in either the domain name of the server or your first and last name.
Fill in your work e-mail address.
Leave the challenge password empty; otherwise you will need to enter it every time you restart the Apache Web server.
Fill in the optional company name, or leave it blank.
Now you can generate the certificate. You are going to use
openssl again, and the format of the certificate is the
default PEM.
Export the private part of the key to new.cert.key.
You will be prompted for the passphrase you entered when creating the
certificate signing request (CSR).
tux > sudo openssl rsa -in privkey.pem -out new.cert.key
Generate the public part of the certificate according to the information
you filled out in the signing request. The -days option
specifies the length of time before the certificate expires. You can
revoke a certificate, or replace one before it expires.
tux > sudo openssl x509 -in new.cert.csr -out new.cert.cert -req \
  -signkey new.cert.key -days 365
Copy the certificate files to the relevant directories, so that the
Apache server can read them. Make sure that the private key
/etc/apache2/ssl.key/server.key is not
world-readable, while the public PEM certificate
/etc/apache2/ssl.crt/server.crt is.
tux > sudo cp new.cert.cert /etc/apache2/ssl.crt/server.crt
tux > sudo cp new.cert.key /etc/apache2/ssl.key/server.key
The last step is to copy the public certificate file from
/etc/apache2/ssl.crt/server.crt to a location where
your users can access it to incorporate it into the list of known and
trusted CAs in their Web browsers. Otherwise a browser complains that the
certificate was issued by an unknown authority.
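Before handing server.crt to users, it can be worth inspecting what a certificate actually contains. A sketch using throwaway files (the paths and the subject values are placeholders; -nodes and -subj make the call non-interactive):

```shell
# Generate a throwaway key and self-signed certificate in one step.
openssl req -new -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 365 -subj "/C=DE/O=Example/CN=www.example.com"

# Show the subject and the validity period of the certificate.
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```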
There are several official certificate authorities that can sign your certificates. Because the certificate is signed by a trustworthy third party, it can be fully trusted. Publicly operating secure Web servers usually have an officially signed certificate. A list of the most used Certificate Authorities (CAs) is available at https://en.wikipedia.org/wiki/Certificate_authority#Providers.
When requesting an officially signed certificate, you do not send a certificate to the CA. Instead, issue a Certificate Signing Request (CSR). To create a CSR, run the following command:
tux > openssl req -new -newkey rsa:2048 -nodes -keyout newkey.pem -out newreq.pem
You are asked to enter a distinguished name. This requires you to answer a few questions, such as country name or organization name. Enter valid data—everything you enter here later shows up in the certificate and is checked. You do not need to answer every question. If one does not apply to you or you want to leave it blank, use “.”. Common name is the name of the CA itself—choose a significant name, such as My company CA. Last, a challenge password and an alternative company name must be entered.
Find the CSR in the directory in which you ran the command. The file
is named newreq.pem.
The default port for SSL and TLS requests on the Web server side is 443. There is no conflict between a “regular” Apache listening on port 80 and an SSL/TLS-enabled Apache listening on port 443. In fact, HTTP and HTTPS can be run with the same Apache instance. Usually separate virtual hosts are used to dispatch requests to port 80 and port 443 to separate virtual servers.
Do not forget to open the firewall for SSL-enabled Apache on port 443.
This can be done with firewalld as described in
Section 15.4.1, “Configuring the Firewall on the Command Line”.
The SSL module is enabled by default in the global server configuration.
In case it has been disabled on your host, activate it with the following
command: a2enmod ssl. To finally enable SSL, the server
needs to be started with the flag “SSL”. To do so, call
a2enflag SSL (case-sensitive!). If you have chosen to
encrypt your server certificate with a password, you should also increase
the value for APACHE_TIMEOUT in
/etc/sysconfig/apache2, so you have enough time to
enter the passphrase when Apache starts. Restart the server to make these
changes active. A reload is not sufficient.
The virtual host configuration directory contains a template
/etc/apache2/vhosts.d/vhost-ssl.template with
SSL-specific directives that are extensively documented. Refer to
Section 24.2.2.1, “Virtual Host Configuration” for the general
virtual host configuration.
To get started, copy the template to
/etc/apache2/vhosts.d/MYSSL-HOST.conf
and edit it. Adjusting the values for the following directives should be
sufficient:
DocumentRoot
ServerName
ServerAdmin
ErrorLog
TransferLog
By default it is not possible to run multiple SSL-enabled virtual hosts on a server with only one IP address. Name-based virtual hosting requires that Apache knows which server name has been requested. The problem with SSL connections is that such a request can only be read after the SSL connection has already been established (using the default virtual host). As a result, users will receive a warning message stating that the certificate does not match the server name.
openSUSE Leap comes with an extension to the SSL protocol called Server Name Indication (SNI), which addresses this issue by sending the name of the virtual domain as part of the SSL negotiation. This enables the server to “switch” to the correct virtual domain early and present the browser with the correct certificate.
SNI is enabled by default on openSUSE Leap. To enable Name-Based Virtual
Hosts for SSL, configure the server as described in
Section 24.2.2.1.1, “Name-Based Virtual Hosts”
(note that you need to use port 443 rather than port
80 with SSL).
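With SNI enabled, name-based SSL virtual hosts look much like their port-80 counterparts. A minimal sketch, assuming two hypothetical hosts with their own certificates (all host names, document roots, and certificate paths are placeholders):

```apache
<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /srv/www/example-com
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl.crt/example-com.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/example-com.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.org
    DocumentRoot /srv/www/example-org
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl.crt/example-org.crt
    SSLCertificateKeyFile /etc/apache2/ssl.key/example-org.key
</VirtualHost>
```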
SNI must also be supported on the client side. However, SNI is supported by most browsers, except for certain older browsers. For more information, see https://en.wikipedia.org/wiki/Server_Name_Indication#Support.
To configure handling of non-SNI capable browsers, use the directive
SSLStrictSNIVHostCheck. When set to
on in the server configuration, non-SNI capable
browsers will be rejected for all virtual hosts. When set to
on within a VirtualHost
directive, access to that particular host will be rejected.
When set to off in the server configuration, the
server will behave as if not having SNI support. SSL requests will be
handled by the first virtual host defined (for port
443).
As of openSUSE® Leap 42.1, you can run multiple Apache instances on the same server. This has several advantages over running multiple virtual hosts (see Section 24.2.2.1, “Virtual Host Configuration”):
When a virtual host needs to be disabled for some time, you need to change the Web server configuration and restart it so that the change takes effect.
In case of problems with one virtual host, you need to restart all of them.
You can run the default Apache instance as usual:
tux > sudo systemctl start apache2
It reads the default /etc/sysconfig/apache2 file. If
the file is not present, or it is present but it does not set the
APACHE_HTTPD_CONF variable, it reads
/etc/apache2/httpd.conf.
To activate another Apache instance, run:
tux > sudo systemctl start apache2@INSTANCE_NAME
For example:
tux > sudo systemctl start apache2@example_web.org
By default, the instance uses
/etc/apache2@example_web.org/httpd.conf as a main
configuration file, which can be overwritten by setting
APACHE_HTTPD_CONF in
/etc/sysconfig/apache2@example_web.org.
The following example sets up an additional instance of Apache. Note that you
need to execute all the commands as root.
Create a new configuration file based on
/etc/sysconfig/apache2, for example
/etc/sysconfig/apache2@example_web.org:
tux > sudo cp /etc/sysconfig/apache2 /etc/sysconfig/apache2@example_web.org
Edit the file /etc/sysconfig/apache2@example_web.org
and change the line containing
APACHE_HTTPD_CONF
to
APACHE_HTTPD_CONF="/etc/apache2/httpd@example_web.org.conf"
Create the file
/etc/apache2/httpd@example_web.org.conf based on
/etc/apache2/httpd.conf.
tux > sudo cp /etc/apache2/httpd.conf /etc/apache2/httpd@example_web.org.conf
Edit /etc/apache2/httpd@example_web.org.conf and
change
Include /etc/apache2/listen.conf
to
Include /etc/apache2/listen@example_web.org.conf
Review all the directives and change them to fit your needs. You will probably want to change
Include /etc/apache2/global.conf
and create a new global@example_web.org.conf for each
instance. We suggest changing
ErrorLog /var/log/apache2/error_log
to
ErrorLog /var/log/apache2/error@example_web.org_log
to have separate logs for each instance.
Create /etc/apache2/listen@example_web.org.conf based
on /etc/apache2/listen.conf.
tux > sudo cp /etc/apache2/listen.conf /etc/apache2/listen@example_web.org.conf
Edit /etc/apache2/listen@example_web.org.conf and
change
Listen 80
to the port number you want the new instance to run on, for example 82:
Listen 82
To run the new Apache instance over a secured protocol (see Section 24.6, “Setting Up a Secure Web Server with SSL”), change also the line
Listen 443
for example to
Listen 445
Start the new Apache instance:
tux > sudo systemctl start apache2@example_web.org
Check if the server is running by pointing your Web browser at
http://server_name:82. If you previously changed the
name of the error log file for the new instance, you can check it:
tux > sudo tail -f /var/log/apache2/error@example_web.org_log
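The copy-and-edit steps above can also be scripted. The following sketch performs the same substitutions with sed, but against a scratch copy of the configuration files under /tmp, so it can be tried without touching /etc (the instance name matches the example; adapt the paths for real use):

```shell
INSTANCE=example_web.org
ROOT=/tmp/apache-demo            # stand-in for / to keep the demo harmless

# Create stand-ins for the three files the procedure copies.
mkdir -p "$ROOT/etc/sysconfig" "$ROOT/etc/apache2"
echo 'APACHE_HTTPD_CONF=""' > "$ROOT/etc/sysconfig/apache2"
echo 'Include /etc/apache2/listen.conf' > "$ROOT/etc/apache2/httpd.conf"
echo 'Listen 80' > "$ROOT/etc/apache2/listen.conf"

# Steps 1-2: copy the sysconfig file and point it at the instance's httpd.conf.
cp "$ROOT/etc/sysconfig/apache2" "$ROOT/etc/sysconfig/apache2@$INSTANCE"
sed -i "s|^APACHE_HTTPD_CONF=.*|APACHE_HTTPD_CONF=\"/etc/apache2/httpd@$INSTANCE.conf\"|" \
  "$ROOT/etc/sysconfig/apache2@$INSTANCE"

# Steps 3-4: copy httpd.conf and redirect its Include to the instance listen file.
cp "$ROOT/etc/apache2/httpd.conf" "$ROOT/etc/apache2/httpd@$INSTANCE.conf"
sed -i "s|listen.conf|listen@$INSTANCE.conf|" "$ROOT/etc/apache2/httpd@$INSTANCE.conf"

# Steps 5-6: copy listen.conf and move the instance to port 82.
cp "$ROOT/etc/apache2/listen.conf" "$ROOT/etc/apache2/listen@$INSTANCE.conf"
sed -i "s|^Listen 80$|Listen 82|" "$ROOT/etc/apache2/listen@$INSTANCE.conf"
```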
Here are several points to consider when setting up more Apache instances on the same server:
The file
/etc/sysconfig/apache2@INSTANCE_NAME
can include the same variables as
/etc/sysconfig/apache2, including module loading and
MPM setting.
The default Apache instance does not need to be running while other instances run.
The Apache helper utilities a2enmod,
a2dismod and apachectl operate on
the default Apache instance if not specified otherwise with the
HTTPD_INSTANCE environment variable. The
following example
tux > sudo export HTTPD_INSTANCE=example_web.org
tux > sudo a2enmod access_compat
tux > sudo a2enmod status
tux > sudo apachectl start
will add access_compat and
status modules to the
APACHE_MODULES variable of
/etc/sysconfig/apache2@example_web.org, and then
start the example_web.org instance.
A Web server exposed to the public Internet requires an ongoing administrative effort. It is inevitable that security issues appear, both related to the software and to accidental misconfiguration. Here are some tips for how to deal with them.
If vulnerabilities are found in the Apache software, SUSE will issue a security advisory. It contains instructions for fixing the vulnerabilities, which should be applied as soon as possible. The SUSE security announcements are available from the following locations:
Web Page. http://www.suse.com/support/security/
Mailing List Archive. http://lists.opensuse.org/opensuse-security-announce/
List of Security Announcements. http://www.suse.com/support/update/
By default in openSUSE Leap, the DocumentRoot
directory /srv/www/htdocs and the CGI directory
/srv/www/cgi-bin belong to the user and group
root. You should not change these permissions. If
the directories are writable for all, any user can place files into them.
These files might then be executed by Apache with the permissions of
wwwrun, which may give the user unintended access
to file system resources. Use subdirectories of
/srv/www to place the
DocumentRoot and CGI directories for your virtual
hosts and make sure that directories and files belong to user and group
root.
By default, access to the whole file system is denied in
/etc/apache2/httpd.conf. You should never overwrite
these directives, but specifically enable access to all directories Apache
should be able to read. For details, see
Section 24.2.2.1.3, “Basic Virtual Host Configuration”.
In doing so, ensure that no critical files, such as password or system
configuration files, can be read from the outside.
Interactive scripts in Perl, PHP, SSI, or any other programming language can essentially run arbitrary commands and therefore present a general security issue. Scripts that will be executed from the server should only be installed from sources the server administrator trusts—allowing users to run their own scripts is generally not a good idea. It is also recommended to do security audits for all scripts.
To make the administration of scripts as easy as possible, it is common
practice to limit the execution of CGI scripts to specific directories
instead of globally allowing them. The directives
ScriptAlias and Option
ExecCGI are used for configuration. The openSUSE Leap default
configuration does not allow execution of CGI scripts from everywhere.
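A directory-scoped CGI setup typically combines these directives as in the following sketch (the alias and directory mirror the openSUSE default CGI directory; treat the exact block as illustrative rather than the shipped configuration):

```apache
ScriptAlias /cgi-bin/ "/srv/www/cgi-bin/"

<Directory "/srv/www/cgi-bin/">
    Options +ExecCGI
    AddHandler cgi-script .cgi .pl
    Require all granted
</Directory>
```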
All CGI scripts run as the same user, so different scripts can potentially conflict with each other. The module suEXEC lets you run CGI scripts under a different user and group.
When enabling user directories (with mod_userdir
or mod_rewrite) you should strongly consider not
allowing .htaccess files, which would allow users to
overwrite security settings. At the very least, limit what users may
override with the AllowOverride directive.
In openSUSE Leap, .htaccess files are enabled by
default, but the user is not allowed to overwrite any
Option directives when using
mod_userdir (see the
/etc/apache2/mod_userdir.conf configuration file).
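To narrow what .htaccess files may change, AllowOverride can be restricted per directory. A sketch under the assumption that users should only be able to override authentication settings in their Web directories (the path is a placeholder):

```apache
<Directory "/home/*/public_html">
    # Allow .htaccess, but only for authentication directives;
    # Options and other security-relevant settings stay fixed.
    AllowOverride AuthConfig
</Directory>
```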
If Apache does not start, the Web page is not accessible, or users cannot connect to the Web server, it is important to find the cause of the problem. Here are some typical places to look for error explanations and important things to check:
apache2.service subcommands:
Instead of starting and stopping the Web server with the binary
/usr/sbin/apache2ctl, use the
systemctl commands (described in Section 24.3, “Starting and Stopping Apache”). systemctl status
apache2 is verbose about errors, and it even provides tips and
hints for fixing configuration errors.
In case of both fatal and nonfatal errors, check the Apache log files for
causes, mainly the error log file located at
/var/log/apache2/error_log by default. Additionally,
you can control the verbosity of the logged messages with the
LogLevel directive if more detail is needed in
the log files.
Watch the Apache log messages with the command tail -F
/var/log/apache2/MY_ERROR_LOG.
Then run
systemctl restart apache2. Now, try to connect with a
browser and check the output.
A common mistake is to not open the ports for Apache in the firewall configuration of the server. If you configure Apache with YaST, there is a separate option available to take care of this specific issue (see Section 24.2.3, “Configuring Apache with YaST”). If you are configuring Apache manually, open firewall ports for HTTP and HTTPS via YaST's firewall module.
If the error cannot be tracked down with any of these, check the online Apache bug database at http://httpd.apache.org/bug_report.html. Additionally, the Apache user community can be reached via a mailing list available at http://httpd.apache.org/userslist.html.
The package apache2-doc contains the complete
Apache manual in various localizations for local installation and reference.
It is not installed by default; the quickest way to install it is with
the command zypper in apache2-doc. Once installed,
the Apache manual is available at
http://localhost/manual/. You may also access it on the
Web at http://httpd.apache.org/docs-2.4/. SUSE-specific
configuration hints are available in the directory
/usr/share/doc/packages/apache2/README.*.
For a list of new features in Apache 2.4, refer to http://httpd.apache.org/docs/2.4/new_features_2_4.html. Information about upgrading from version 2.2 to 2.4 is available at http://httpd.apache.org/docs-2.4/upgrading.html.
More information about external Apache modules that are briefly described in Section 24.4.5, “External Modules” is available at the following locations:
mod_apparmor
mod_auth_kerb
mod_perl
mod_php5
mod_python
mod_security
More information about developing Apache modules or about getting involved in the Apache Web server project is available at the following locations:
Using the YaST module, you can configure your machine to function as an FTP (File Transfer Protocol) server. Anonymous and/or authenticated users can connect to your machine and download files using the FTP protocol. Depending on the configuration, they can also upload files to the FTP server. YaST uses vsftpd (Very Secure FTP Daemon).
If the YaST FTP Server module is not available in your system, install the
yast2-ftp-server package.
To configure the FTP server using YaST, follow these steps:
Open the YaST control center and choose › or run the
yast2 ftp-server command as root.
If no FTP server is installed on your system, you will be asked which server to install when the YaST FTP Server module starts. Choose the vsftpd server and confirm the dialog.
In the dialog, configure the options for starting of the FTP server. For more information, see Section 25.1, “Starting the FTP Server”.
In the dialog, configure FTP directories, welcome message, file creation masks and other parameters. For more information, see Section 25.2, “FTP General Settings”.
In the dialog, set the parameters that affect the load on the FTP server. For more information, see Section 25.3, “FTP Performance Settings”.
In the dialog, set whether the FTP server should be available for anonymous and/or authenticated users. For more information, see Section 25.4, “Authentication”.
In the dialog, configure the operation mode of the FTP server, SSL connections and firewall settings. For more information, see Section 25.5, “Expert Settings”.
Press to save the configurations.
In the frame of the dialog set the way the FTP server is started up. You can choose between starting the server automatically during the system boot and starting it manually. If the FTP server should be started only after an FTP connection request, choose .
The current status of the FTP server is shown in the frame of the dialog. Start the FTP server by clicking . To stop the server, click . After having changed the settings of the server click . Your configurations will be saved by leaving the configuration module with .
The frame of the dialog shows which FTP server is used: either vsftpd or pure-ftpd. If both servers are installed, you can switch between them—the current configuration will automatically be converted.
In the frame of the dialog you can set the which is shown after connecting to the FTP server.
If you check the option, all local users will be placed in a chroot jail in their home directory after login. This option has security implications, especially if the users have upload permission or shell access, so be careful when enabling it.
If you check the option, all FTP requests and responses are logged.
You can limit permissions of files created by anonymous and/or authenticated
users with umask. Set the file creation mask for anonymous users in
and the file creation mask for
authenticated users in . The
masks should be entered as octal numbers with a leading zero. For more
information about umask, see the umask man page
(man 1p umask).
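The effect of such a mask can be checked directly in the shell. For example, a mask of 0177 strips the execute bits and all group/other permissions from newly created files (the scratch file name is arbitrary):

```shell
# Run in a subshell so the changed umask does not leak into the session.
(
  umask 0177
  rm -f /tmp/umask-demo.txt
  touch /tmp/umask-demo.txt
  # Default file mode 666 masked with 0177 leaves 600: rw for the owner only.
  stat -c '%a' /tmp/umask-demo.txt    # prints 600
)
```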
In the frame set the directories used for
anonymous and authorized users. With , you can
select a directory to be used from the local file system. The default FTP
directory for anonymous users is /srv/ftp. Note that
vsftpd does not allow this directory to be writable for all users. The
subdirectory upload with write permissions for
anonymous users is created instead.
The pure-ftpd server allows the FTP directory for anonymous users to be writable. When switching between servers, make sure you remove the write permissions in the directory that was used with pure-ftpd before switching back to the vsftpd server.
In the dialog set the parameters which affect the load on the FTP server. is the maximum time (in minutes) the remote client may spend between FTP commands. In case of longer inactivity, the remote client is disconnected. determines the maximum number of clients which can be connected from a single IP address. determines the maximum number of clients which may be connected. Any additional clients will get an error message.
The maximum data transfer rate (in KB/s) is set in for local authenticated users, and in for anonymous clients respectively. The default value for the
rate settings is 0, which means unlimited data transfer
rate.
In the frame of the dialog, you can set which users are allowed to access your FTP server. You can choose between the following options: granting access to anonymous users only, to authenticated users only (with accounts on the system) or to both types of users.
To allow users to upload files to the FTP server, check in the frame of the dialog. Here you can allow uploading or creating directories even for anonymous users by checking the respective box.
If a vsftpd server is used and you want anonymous users to be able to upload files or create directories, a subdirectory with writing permissions for all users needs to be created in the anonymous FTP directory.
An FTP server can run in active or in passive mode. By default the server runs in passive mode. To switch into active mode, uncheck option in dialog. You can also change the range of ports on the server used for the data stream by tweaking the and options.
If you want encrypted communication between clients and the server, you can . Check the versions of the protocol to be supported and specify the DSA certificate to be used for SSL encrypted connections.
If your system is protected by a firewall, check to enable a connection to the FTP server.
For more information about the FTP server read the manual pages of
vsftpd and vsftpd.conf.
Squid is a widely-used proxy cache for Linux and Unix platforms. This means that it stores requested Internet objects, such as data on a Web or FTP server, on a machine that is closer to the requesting workstation than the server. It can be set up in multiple hierarchies to assure optimal response times and low bandwidth usage, even in modes that are transparent to end users.
Squid acts as a proxy cache. It redirects object requests from clients (in this case, from Web browsers) to the server. When the requested objects arrive from the server, it delivers the objects to the client and keeps a copy of them in the hard disk cache. An advantage of caching is that several clients requesting the same object can be served from the hard disk cache. This enables clients to receive the data much faster than from the Internet. This procedure also reduces the network traffic.
Along with actual caching, Squid offers a wide range of features:
Distributing load over intercommunicating hierarchies of proxy servers
Defining strict access control lists for all clients accessing the proxy
Allowing or denying access to specific Web pages using other applications
Generating statistics about frequently-visited Web pages for the assessment of surfing habits
Squid is not a generic proxy. It normally proxies only HTTP connections. It supports the protocols FTP, Gopher, SSL, and WAIS, but it does not support other Internet protocols, such as the news protocol, or video conferencing protocols. Because Squid only supports the UDP protocol to provide communication between different caches, many multimedia programs are not supported.
As a proxy cache, Squid can be used in several ways. When combined with a firewall, it can help with security. Multiple proxies can be used together. It can also determine what types of objects should be cached and for how long.
It is possible to use Squid together with a firewall to secure internal networks from the outside using a proxy cache. The firewall denies all clients access to external services except Squid. All Web connections must be established by the proxy. With this configuration, Squid completely controls Web access.
If the firewall configuration includes a DMZ, the proxy should operate within this zone. Section 26.6, “Configuring a Transparent Proxy” describes how to implement a transparent proxy. This simplifies the configuration of the clients, because in this case, they do not need any information about the proxy.
Several instances of Squid can be configured to exchange objects between them. This reduces the total system load and increases the chances of retrieving an object from the local network. It is also possible to configure cache hierarchies, so a cache can forward object requests to sibling caches or to a parent cache—causing it to request objects from another cache in the local network or directly from the source.
Choosing the appropriate topology for the cache hierarchy is very important, because it is not desirable to increase the overall traffic on the network. For a very large network, it would make sense to configure a proxy server for every subnet and connect them to a parent proxy, which in turn is connected to the proxy cache of the ISP.
All this communication is handled by ICP (Internet cache protocol) running on top of the UDP protocol. Data transfers between caches are handled using HTTP (hypertext transfer protocol) based on TCP.
To find the most appropriate server from which to request objects, a cache
sends an ICP request to all sibling proxies. The sibling proxies answer
these requests via ICP responses. If the object was detected, they answer
with the code HIT; if not, with MISS.
If multiple HIT responses were found, the proxy server
decides from which server to download, depending on factors such as which
cache sent the fastest answer or which one is closer. If no satisfactory
responses are received, the request is sent to the parent cache.
To avoid duplication of objects in different caches in the network, other ICP protocols are used, such as CARP (cache array routing protocol) or HTCP (hypertext cache protocol). The more objects maintained in the network, the greater the possibility of finding the desired one.
Many objects available in the network are not static, such as dynamically generated pages and TLS/SSL-encrypted content. Objects like these are not cached because they change each time they are accessed.
To determine how long objects should remain in the cache, objects are assigned one of several states. Web and proxy servers find out the status of an object by adding headers to these objects, such as “Last modified” or “Expires” and the corresponding date. Other headers specifying that objects must not be cached can be used as well.
Objects in the cache are normally replaced, because of a lack of free disk space, using algorithms such as LRU (least recently used). This means that the proxy expunges those objects that have not been requested for the longest time.
System requirements largely depend on the maximum network load that the system must bear. Therefore, examine load peaks, as during those times, load might be more than four times the day's average. When in doubt, slightly overestimate the system's requirements. Having Squid working close to the limit of its capabilities can lead to a severe loss in quality of service. The following sections point to system factors in order of significance:
RAM size
CPU speed/physical CPU cores
Size of the disk cache
Hard disks/SSDs and their architecture
The amount of memory (RAM) required by Squid directly correlates with the number of objects in the cache. Random access memory is much faster than a hard disk/SSD. Therefore, it is very important to have sufficient memory for the Squid process, because system performance is dramatically reduced if the swap disk is used.
Squid also stores cache object references and frequently requested objects in the main memory to speed up retrieval of this data. In addition to that, there is other data that Squid needs to keep in memory, such as a table with all the IP addresses handled, an exact domain name cache, the most frequently requested objects, access control lists, buffers, and more.
Squid is tuned to work best with lower processor core counts (4–8 physical cores), with each providing high performance. Technologies providing virtual cores such as hyperthreading can hurt performance.
To make the best use of multiple CPU cores, it is necessary to set up multiple worker threads writing to different caching devices. By default, multi-core support is mostly disabled.
In a small cache, the probability of a HIT (finding the requested object already located there) is small, because the cache is easily filled and less frequently requested objects are replaced by newer ones. If, for example, 1 GB is available for the cache and the users use up only 10 MB per day surfing, it would take more than one hundred days to fill the cache.
The easiest way to determine the necessary cache size is to consider the maximum transfer rate of the connection. With a 1 Mbit/s connection, the maximum transfer rate is 128 KB/s. If all this traffic ended up in the cache, in one hour it would add up to 460 MB. Assuming that all this traffic is generated in only eight working hours, it would reach 3.6 GB in one day. Because the connection is normally not used to its upper volume limit, it can be assumed that the total data volume handled by the cache is approximately 2 GB. Hence, in this example, 2 GB of disk space is required for Squid to keep one day's worth of browsing data cached.
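The arithmetic above can be sketched in a short shell snippet (the figures match the example: a 1 Mbit/s link at 128 KB/s over eight working hours):

```shell
# Back-of-the-envelope cache sizing from the example figures
rate_kb_s=128                                  # 1 Mbit/s ≈ 128 KB/s
per_hour_mb=$(( rate_kb_s * 3600 / 1000 ))     # ≈ 460 MB per hour at full load
per_day_mb=$(( per_hour_mb * 8 ))              # eight working hours ≈ 3.6 GB
echo "${per_hour_mb} MB/h, ${per_day_mb} MB over eight hours"
```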
Speed plays an important role in the caching process, so this factor deserves special attention. For hard disks/SSDs, this parameter is described as random seek time or random read performance, measured in milliseconds. Because the data blocks that Squid reads from or writes to the hard disk/SSD tend to be small, the seek time/read performance of the hard disk/SSD is more important than its data throughput.
For use as a proxy, hard disks with high rotation speeds or SSDs are the best choice. When using hard disks, it can be better to use multiple smaller hard disks, each with a single cache directory to avoid excessive read times.
Using a RAID system allows increasing reliability at the expense of speed. However, for performance reasons, avoid (software) RAID5 and similar setups.
File system choice is usually not decisive. However, using the mount option
noatime can improve performance—Squid provides its
own time stamps and thus does not need the file system to track access
times.
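As a sketch, the noatime option can be set in /etc/fstab for a dedicated cache partition (the device name and file system here are placeholders, not values from this manual):

```
# Example /etc/fstab entry: cache partition mounted without
# access-time updates (device and file system are placeholders)
/dev/sdb1  /var/cache/squid  xfs  noatime  0 0
```

An already mounted file system can be switched over with mount -o remount,noatime /var/cache/squid.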
If not already installed, install the package squid. squid is not among the packages installed by default on openSUSE® Leap.
Squid is already preconfigured in openSUSE Leap, so you can start it directly after the installation. To ensure a smooth start-up, the network should be configured in a way that at least one name server and the Internet can be reached. Problems can arise if a dial-up connection is used with a dynamic DNS configuration. In this case, at least the name server should be specified, because Squid does not start if it does not detect a DNS server in /etc/resolv.conf.
To start Squid, use:
tux > sudo systemctl start squid
If you want Squid to start together with the system, enable the service
with systemctl enable squid.
To check whether Squid is running, choose one of the following ways:
Using systemctl:
tux > systemctl status squid
The output of this command should indicate that Squid is
loaded and active (running).
Using Squid itself:
tux > sudo squid -k check; echo $?
The output of this command should be 0, but may
contain additional warnings or messages.
To test the functionality of Squid on the local system, choose one of the following ways:
To test, you can use squidclient, a command-line tool
that can output the response to a Web request, similar to
wget or curl.
Unlike those tools, squidclient will automatically
connect to the default proxy setup of Squid,
localhost:3128. However, if you changed the
configuration of Squid, you need to configure
squidclient to use different settings using command
line options. For more information, see squidclient
--help.
Example 26.1: A Request With squidclient

tux > squidclient http://www.example.org
HTTP/1.1 200 OK
Cache-Control: max-age=604800
Content-Type: text/html
Date: Fri, 22 Jun 2016 12:00:00 GMT
Expires: Fri, 29 Jun 2016 12:00:00 GMT
Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
Server: ECS (iad/182A)
Vary: Accept-Encoding
X-Cache: HIT
x-ec-custom-error: 1
Content-Length: 1270
X-Cache: MISS from moon
X-Cache-Lookup: MISS from moon:3128
Via: 1.1 moon (squid/3.5.16)
Connection: close

<!doctype html>
<html>
<head>
<title>Example Domain</title>
[...]
</body>
</html>
The output shown in Example 26.1, “A Request With squidclient” can be
split into two parts:
The protocol headers of the response: the lines before the blank line.
The actual content of the response: the lines after the blank line.
To verify that Squid is used, refer to the selected header lines:
The X-Cache headers: the example above contains two X-Cache headers. The line X-Cache: MISS from moon was added by the local Squid and shows that the response was not served from its cache.
The Via header: its value contains the HTTP version, the host name, and the version of Squid in use.
Using a browser: Set up localhost as the proxy and
3128 as the port. You can then load a page and check the
response headers in the panel of the browser's
Inspector or Developer Tools.
The headers should be reproduced similarly to the way shown in
Example 26.1, “A Request With squidclient”.
To allow users from the local system and other systems to access Squid and
the Internet, change the entry in the configuration files
/etc/squid/squid.conf from http_access deny
all to http_access allow all. However, in doing
so, consider that Squid is made completely accessible to anyone by this
action. Therefore, define ACLs (access control lists) that control access
to the proxy. After modifying the configuration file, Squid must be
reloaded or restarted. For more information on ACLs, see
Section 26.5.2, “Options for Access Controls”.
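As a sketch, instead of http_access allow all, a restrictive configuration with a source ACL in /etc/squid/squid.conf could look like this (the 192.168.0.0/16 range is an example subnet, not a value from this manual):

```
# Allow the local network and localhost; deny everything else
acl localnet src 192.168.0.0/16        # example subnet -- adjust to yours
http_access allow localhost
http_access allow localnet
http_access deny all                   # keep this as the last rule
```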
If Squid quits after a short period of time even though it was started
successfully, check whether there is a faulty name server entry or whether
the /etc/resolv.conf file is missing. Squid logs the
cause of a start-up failure in the file
/var/log/squid/cache.log.
To reload Squid, choose one of the following ways:
Using systemctl:
tux > sudo systemctl reload squid
or
tux > sudo systemctl restart squid
Using YaST:
In the Squid module, click the . button.
To stop Squid, choose one of the following ways:
Using systemctl:
tux > sudo systemctl stop squid
Using YaST
In the Squid module, click the . button.
Shutting down Squid can take a while, because Squid waits up to half a minute before dropping the connections to the clients and writing its data to the disk (see the shutdown_lifetime option in /etc/squid/squid.conf).
Terminating Squid with kill or
killall can damage the cache. To be able to restart
Squid, damaged caches must be deleted.
Removing Squid from the system does not remove the cache hierarchy and log
files. To remove these, delete the /var/cache/squid
directory manually.
Setting up a local DNS server makes sense even if it does not manage its own domain. It then simply acts as a caching-only name server and is also able to resolve DNS requests via the root name servers without requiring any special configuration (see Section 19.4, “Starting the BIND Name Server”). How this can be done depends on whether you chose dynamic DNS during the configuration of the Internet connection.
Normally, with dynamic DNS, the DNS server is set by the provider during
the establishment of the Internet connection and the local
/etc/resolv.conf file is adjusted automatically.
This behavior is controlled in the
/etc/sysconfig/network/config file with the
NETCONFIG_DNS_POLICY sysconfig variable. Set
NETCONFIG_DNS_POLICY to ""
with the YaST sysconfig editor.
Then, add the local DNS server in the
/etc/resolv.conf file with the IP address
127.0.0.1 for
localhost. This way, Squid
can always find the local name server when it starts.
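The resulting /etc/resolv.conf entry is a single line:

```
# /etc/resolv.conf -- use the local caching name server
nameserver 127.0.0.1
```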
To make the provider's name server accessible, specify it in the
configuration file /etc/named.conf under
forwarders along with its IP address. With
dynamic DNS, this can be achieved automatically when establishing the
connection by setting the sysconfig variable
NETCONFIG_DNS_POLICY to auto.
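A minimal forwarders block in /etc/named.conf could look like the following sketch (10.0.0.1 stands in for the provider's name server address):

```
# /etc/named.conf -- forward queries the local server cannot resolve
options {
        forwarders { 10.0.0.1; };   # provider's name server (placeholder)
};
```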
With static DNS, no automatic DNS adjustments take place while
establishing a connection, so there is no need to change any sysconfig
variables. However, you must specify the local DNS server in the file
/etc/resolv.conf as described in
Dynamic DNS. Additionally, the provider's
static name server must be specified manually in the
/etc/named.conf file under
forwarders along with its IP address.
If you have a firewall running, make sure DNS requests can pass it.
The YaST Squid module contains the following tabs:
Specifies how Squid is started and which firewall port is open on which interfaces.
Defines all ports on which Squid listens for clients' HTTP requests.
Defines how Squid treats objects in the cache.
Defines settings in regard to cache memory, maximum and minimum object size, and more.
Defines the top-level directory where Squid stores all cache swap files.
Controls the access to the Squid server via ACL groups.
Defines paths to access, cache, and cache store log files, along with connection timeouts and client lifetime.
Sets the language and the e-mail address of the administrator.
All Squid proxy server settings are made in the
/etc/squid/squid.conf file. To start Squid for the
first time, no changes are necessary in this file, but external clients are
initially denied access. The proxy is available for
localhost. The default port
is 3128. The preinstalled configuration file
/etc/squid/squid.conf provides detailed information
about the options and many examples.
Many entries are commented and therefore begin with the comment character
#. The relevant specifications can be found at the end of
the line.
The given values usually correlate with the default values, so removing the
comment signs without changing any of the parameters usually has no effect.
If possible, leave the commented lines as they are and insert the options
along with the modified values in the line below. This way, the default
values may easily be recovered and compared with the changes.
If you have updated from an earlier Squid version, it is recommended to
edit the new /etc/squid/squid.conf and only apply the
changes made in the previous file.
Sometimes, Squid options are added, removed, or modified. Therefore, if you
try to use the old squid.conf, Squid might stop
working properly.
The following is a list of a selection of configuration options for
Squid. It is not exhaustive. The Squid package contains a full, lightly
documented list of options in
/etc/squid/squid.conf.documented.
http_port PORT
This is the port on which Squid listens for client requests. The default
port is 3128, but 8080 is also common.
cache_peer HOST_NAME
TYPE PROXY_PORT ICP_PORT
This option allows creating a network of caches that work together. The
cache peer is a computer that also hosts a network cache and stands in a
relationship to your own. The type of relationship is specified as the
TYPE. The type can either be
parent or sibling.
As the HOST_NAME, specify the name or IP
address of the proxy to use. For PROXY_PORT,
specify the port number for use in a browser (usually
8080). Set ICP_PORT to
7 or, if the ICP port of the parent is not known and
its use is irrelevant to the provider, to 0.
To make Squid behave like a Web browser instead of like a proxy,
prohibit the use of the ICP protocol. You can do so by appending the
options default and no-query.
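Putting these pieces together, a cache_peer line could look like the following sketch (the host name and ports are examples):

```
# Use proxy.example.org as parent cache on port 8080, with ICP disabled
cache_peer proxy.example.org parent 8080 0 default no-query
```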
cache_mem SIZE
This option defines the amount of memory Squid can use for very popular
replies. The default is 8 MB. This value does not specify the total memory usage of Squid and may be exceeded.
cache_dir
STORAGE_TYPE CACHE_DIRECTORY
CACHE_SIZE
LEVEL_1_DIRECTORIES
LEVEL_2_DIRECTORIES
The option cache_dir defines the directory for the
disk cache. In the default configuration on openSUSE Leap, Squid does
not create a disk cache.
The placeholder STORAGE_TYPE can be one of the following:
Directory-based storage types: ufs,
aufs (the default), diskd. All
three are variations of the storage format ufs.
However, while ufs runs as part of the core Squid
thread, aufs runs in a separate thread, and
diskd uses a separate process. This means that the
latter two types avoid blocking Squid because of disk I/O.
Database-based storage systems: rock. This storage
format relies on a single database file, in which each object takes up
one or more memory units of a fixed size (“slots”).
In the following, only the parameters for storage types based on
ufs will be discussed. rock has
somewhat different parameters.
The CACHE_DIRECTORY is the directory for the
disk cache. By default, that is /var/cache/squid.
CACHE_SIZE is the maximum size of that
directory in megabytes; by default, this is set to 100 MB. Set it
to between 50% and a maximum of 80% of available disk space.
The final two values, LEVEL_1_DIRECTORIES and LEVEL_2_DIRECTORIES specify how many subdirectories are created in the CACHE_DIRECTORY. By default, 16 subdirectories are created at the first level below CACHE_DIRECTORY and 256 within each of these. These values should only be increased with caution, because creating too many directories can lead to performance problems.
If you have several disks that share a cache, specify several
cache_dir lines.
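For example, two disks with one cache directory each could be declared as follows (paths and sizes are illustrative):

```
# One cache_dir line per disk: aufs storage, 2 GB each,
# 16 first-level and 256 second-level subdirectories
cache_dir aufs /var/cache/squid.disk1 2048 16 256
cache_dir aufs /var/cache/squid.disk2 2048 16 256
```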
cache_access_log LOG_FILE
, cache_log LOG_FILE
, cache_store_log LOG_FILE
These three options specify the paths where Squid logs all its actions. Normally, nothing needs to be changed here. If Squid is burdened by heavy usage, it might make sense to distribute the cache and the log files over several disks.
client_netmask NETMASK
This option allows masking IP addresses of clients in the log files by
applying a subnet mask. For example, to set the last digit of the IP
address to 0, specify
255.255.255.0.
ftp_user E-MAIL
This option allows setting the password that Squid should use for anonymous FTP login. Specify a valid e-mail address here, because some FTP servers check these for validity.
cache_mgr E-MAIL
If Squid unexpectedly crashes, it sends a message to this e-mail address. The default is webmaster.
logfile_rotate VALUE
If you run squid -k rotate,
squid can rotate log files. The files are numbered in
this process and, after reaching the specified value, the oldest file is
overwritten. The default value is 10 which rotates log
files with the numbers 0 to 9.
However, on openSUSE Leap, rotating log files is performed automatically
using logrotate and the
configuration file /etc/logrotate.d/squid.
append_domain DOMAIN
Use append_domain to specify which domain to append automatically when none is given. Usually, your own domain is specified here, so specifying www in the browser accesses your own Web server.
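A sketch of such an entry (example.com stands in for your own domain):

```
# Append the local domain to unqualified host names
append_domain .example.com
```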
forwarded_for STATE
If this option is set to on, it adds a line to the
header similar to this:
X-Forwarded-For: 192.168.0.1
If you set this option to off, Squid removes the IP
address and the system name of the client from HTTP requests.
negative_ttl TIME
, negative_dns_ttl TIME
If these options are set, Squid will cache some types of failures, such as 404 responses. It will then refuse to issue new requests for such objects, even if they have become available in the meantime.
By default, negative_ttl is set to 0 and negative_dns_ttl is set to 1 minute.
This means that negative responses to Web requests are not cached by
default, while negative responses to DNS requests are cached for 1
minute.
never_direct allow ACL_NAME
To prevent Squid from taking requests directly from the Internet, use
the option never_direct to force connection to
another proxy. This must have previously been specified in
cache_peer. If all is specified as
the ACL_NAME, all requests are forwarded
directly to the parent. This can be necessary, for example, if you are using a provider that dictates the use of its proxies or whose firewall denies direct Internet access.
Squid provides a detailed system for controlling the access to the proxy.
These Access Control Lists (ACL) are lists with rules that are processed
sequentially. ACLs must be defined before they can be used. Some default
ACLs, such as all and localhost,
already exist. However, the mere definition of an ACL does not mean that it
is actually applied. This only happens when there is a corresponding
http_access rule.
The syntax for the option acl is as follows:
acl ACL_NAME TYPE DATA
The placeholders within this syntax stand for the following:
The name ACL_NAME can be chosen arbitrarily.
For TYPE, select from a variety of different
options which can be found in the ACCESS CONTROLS
section in the /etc/squid/squid.conf file.
The specification for DATA depends on the individual ACL type and can also be read from a file, for example, host names, IP addresses, or URLs.
To add rules in the YaST Squid module, open the module and click the tab. Click under the ACL Groups list and enter the name of your rule, the type, and its parameters.
For more information on types of ACL rules, see the Squid documentation at http://www.squid-cache.org/Versions/v3/3.5/cfgman/acl.html.
acl mysurfers srcdomain .example.com
acl teachers src 192.168.1.0/255.255.255.0
acl students src 192.168.7.0-192.168.9.0/255.255.255.0
acl lunch time MTWHF 12:00-15:00
This ACL defines mysurfers as all users coming from the domain example.com.
This ACL defines teachers as all users with IP addresses from the 192.168.1.0 network.
This ACL defines students as all users with IP addresses from the 192.168.7.0 to 192.168.9.0 networks.
This ACL defines lunch as the time from Monday to Friday between noon and 3 p.m.
http_access defines who is allowed to use the proxy and
who can access what on the Internet. For this, ACLs must be defined.
localhost and all have already
been defined above for which you can deny or allow access via
deny or allow. A list containing
any number of http_access entries can be created,
processed from top to bottom. Depending on which occurs first, access is
allowed or denied to the respective URL. The last entry should always be
http_access deny all. In the following example,
localhost has free access to everything while all
other hosts are denied access completely:
http_access allow localhost
http_access deny all
In another example using these rules, the group
teachers always has access to
the Internet. The group
students only has access
between Monday and Friday during lunch time:
http_access deny localhost
http_access allow teachers
http_access allow students lunch
http_access deny all
For readability, within the configuration file
/etc/squid/squid.conf, specify all
http_access options as a block.
url_rewrite_program PATH
With this option, specify a URL rewriter.
auth_param basic program
PATH
If users must be authenticated on the proxy, set a corresponding
program, such as /usr/sbin/pam_auth. When accessing
pam_auth for the first time, the user sees a login
window in which they need to specify a user name and a password. In
addition, you need an ACL, so only clients with a valid login can use
the Internet:
acl password proxy_auth REQUIRED
http_access allow password
http_access deny all
In the acl proxy_auth option, using
REQUIRED means that all valid user names are
accepted. REQUIRED can also be replaced with a list
of permitted user names.
ident_lookup_access allow
ACL_NAME
With this option, have an ident request run to find each user's identity
for all clients defined by an ACL of the type src.
Alternatively, to use this for all clients, apply the predefined ACL
all as the ACL_NAME.
All clients covered by ident_lookup_access must run an
ident daemon. On Linux, you can use
pidentd (package
pidentd
) as the ident daemon. For other operating systems, free software is
usually available. To ensure that only clients with a successful ident
lookup are permitted, define a corresponding ACL:
acl identhosts ident REQUIRED
http_access allow identhosts
http_access deny all
In the acl identhosts ident option, using
REQUIRED means that all valid user names are
accepted. REQUIRED can also be replaced with a list
of permitted user names.
Using ident can slow down access time, because ident
lookups are repeated for each request.
The usual way of working with proxy servers is as follows: the Web browser sends requests to a certain port of the proxy server and the proxy always provides these required objects, regardless of whether they are in its cache. However, in some cases the transparent proxy mode of Squid makes sense:
If, for security reasons, it is recommended that all clients use a proxy to surf the Internet.
If all clients must use a proxy, regardless of whether they are aware of it.
If the proxy in a network is moved, but the existing clients need to retain their old configuration.
A transparent proxy intercepts and answers the requests of the Web browser, so the Web browser receives the requested pages without knowing where they are coming from. As the name indicates, the entire process is transparent to the user.
In /etc/squid/squid.conf, on the line of the option
http_port add the parameter
transparent:
http_port 3128 transparent
Restart Squid:
tux > sudo systemctl restart squid
Set up SuSEfirewall2 to redirect HTTP traffic to the port given in http_port (in the example above, that was port 3128). To do so, edit the configuration file /etc/sysconfig/SuSEfirewall2.
This example assumes that you are using the following devices:
Device pointing to the Internet: FW_DEV_EXT="eth1"
Device pointing to the network: FW_DEV_INT="eth0"
Define ports and services (see /etc/services) on the
firewall that are accessed from untrusted (external) networks such as the
Internet. In this example, only Web services are offered to the outside:
FW_SERVICES_EXT_TCP="www"
Define ports or services (see /etc/services) on the
firewall that are accessed from the secure (internal) network, both via
TCP and UDP:
FW_SERVICES_INT_TCP="domain www 3128" FW_SERVICES_INT_UDP="domain"
This allows accessing Web services and Squid (whose default port is
3128). The service “domain” stands for DNS
(domain name service). This service is commonly used. Otherwise, simply
remove domain from the above entries and set the
following option to no:
FW_SERVICE_DNS="yes"
The option FW_REDIRECT is very important, as it is used
for the actual redirection of HTTP traffic to a specific port. The
configuration file explains the syntax in a comment above the option:
# Format:
# list of <source network>[,<destination network>,<protocol>[,dport[:lport]]]
# Where protocol is either tcp or udp. dport is the original
# destination port and lport the port on the local machine to
# redirect the traffic to
#
# An exclamation mark in front of source or destination network
# means everything EXCEPT the specified network
That is:
Specify the IP address and the netmask of the internal networks accessing the proxy firewall.
Specify the IP address and the netmask to which these clients send their requests. In the case of Web browsers, specify the network 0/0, a wild card that means “to everywhere.”
Specify the original port to which these requests are sent.
Specify the port to which all these requests are redirected. In the
example below, only Web services (port 80) are
redirected to the proxy port (port 3128). If there
are more networks or services to add, separate them with a space in the
respective entry.
Because Squid supports protocols other than HTTP, you can also redirect requests from other ports to the proxy. For example, you can also redirect port 21 (FTP) and port 443 (HTTPS or SSL).
Therefore, for a Squid configuration, you could use:
FW_REDIRECT="192.168.0.0/16,0/0,tcp,80,3128"
In the configuration file
/etc/sysconfig/SuSEfirewall2, make sure that
the entry START_FW is set to "yes".
Restart SuSEfirewall2:
tux > sudo systemctl restart SuSEfirewall2
To verify that everything is working properly, check the Squid log files
in /var/log/squid/access.log. To verify that all
ports are correctly configured, perform a port scan on the machine from
any computer outside your network. Only the Web services (port 80) should
be open. To scan the ports with nmap, use:
nmap -O IP_ADDRESS
Start the YaST Squid module:
In the tab, enable . Click to select the interfaces on which to open the port. This option is available only if the Firewall is enabled.
In the tab, select the first line
with the port 3128.
Click the button. A small window appears where you can edit the current HTTP port. Select .
Finish with .
Configure the Firewall settings as described in Step 3 of Procedure 26.1, “Squid as a Transparent Proxy (Command Line)”.
The Squid cache manager CGI interface (cachemgr.cgi) is a CGI utility for displaying statistics about the memory usage of a running Squid process. It is also a convenient way to manage the cache and view statistics without logging in to the server.
Make sure the Apache Web server is running on your system. Configure Apache as described in Chapter 24, The Apache HTTP Server. In particular, see Section 24.5, “Enabling CGI Scripts”. To check whether Apache is already running, use:
tux > sudo systemctl status apache2
If inactive is shown, you can start Apache with the
openSUSE Leap default settings:
tux > sudo systemctl start apache2
Now enable cachemgr.cgi in
Apache. To do so, create a configuration file for a
ScriptAlias.
Create the file in the directory /etc/apache2/conf.d
and name it cachemgr.conf. In it, add the following:
ScriptAlias /squid/cgi-bin/ /usr/lib64/squid/
<Directory "/usr/lib64/squid/">
    Options +ExecCGI
    AddHandler cgi-script .cgi
    Require host HOST_NAME
</Directory>
Replace HOST_NAME with the host name of the
computer you want to access
cachemgr.cgi from. This allows
only your computer to access
cachemgr.cgi. To allow access
from anywhere, use Require all granted instead.
If Squid and your Apache Web server run on the same computer, there
should be no changes that need to be made to
/etc/squid/squid.conf. However, verify that
/etc/squid/squid.conf contains the following lines:
http_access allow manager localhost
http_access deny manager
These lines allow you to access the manager interface from your own
computer (localhost) but not from elsewhere.
If Squid and your Apache Web server run on different computers, you need to add extra rules to allow access from the CGI script to Squid. Define an ACL for your server (replace WEB_SERVER_IP with the IP address of your Web server):
acl webserver src WEB_SERVER_IP/255.255.255.255
Make sure the following rules are in the configuration file. Compared to the default configuration, only the rule in the middle is new. However, the sequence is important.
http_access allow manager localhost
http_access allow manager webserver
http_access deny manager
(Optional) You can configure one or more passwords for
cachemgr.cgi. This also allows
access to more actions, such as closing the cache remotely or viewing more
information about the cache. For this, configure the options
cache_mgr and cachemgr_passwd with one
or more passwords for the manager and a list of allowed actions.
For example, to enable viewing the index page, the menu, and the 60-minute average of counters without authentication, to enable toggling offline mode using the password secretpassword, and to disable everything else completely, use the following configuration:
cache_mgr user
cachemgr_passwd none index menu 60min
cachemgr_passwd secretpassword offline_toggle
cachemgr_passwd disable all
cache_mgr defines a user name. cachemgr_passwd defines which actions are allowed using which password.
The keywords none and disable are
special: none removes the need for a password,
disable disables functionality outright.
The full list of actions can be best seen after logging in to
cachemgr.cgi. To find out how
the operation needs to be referenced in the configuration file, see the
string after &operation= in the URL of the action
page. all is a special keyword meaning all actions.
Reload Squid and Apache after the configuration file changes:
tux > sudo systemctl reload squid
tux > sudo systemctl reload apache2
To view the statistics, go to the
cachemgr.cgi page that you set
up before. For example, it could be
http://webserver.example.org/squid/cgi-bin/cachemgr.cgi.
Choose the right server, and, if set, specify user name and password. Then click and browse through the different statistics.
Calamaris is a Perl script used to generate reports of cache activity in
ASCII or HTML format. It works with native Squid access log files. The
Calamaris home page is located at
http://cord.de/calamaris-english. This tool does not
belong to the openSUSE Leap default installation scope—to use it,
install the calamaris package.
Log in as root, then enter:
cat access1.log [access2.log access3.log] | calamaris OPTIONS > reportfile
When using more than one log file, make sure they are chronologically
ordered, with older files listed first. This can be achieved by either
listing the files one after the other as in the example above, or by using
access{1..3}.log.
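The brace expansion mentioned above lets the shell generate the ordered file list:

```shell
# Bash expands access{1..3}.log to the files in ascending order
echo access{1..3}.log
# → access1.log access2.log access3.log
```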
calamaris takes the following options:
-a
output all available reports
-w
output as HTML report
-l
include a message or logo in report header
More information about the various options can be found in the program's
manual page with man calamaris.
A typical example is:
cat access.log.{10..1} access.log | calamaris -a -w \
> /usr/local/httpd/htdocs/Squid/squidreport.html

This puts the report in the directory of the Web server. Apache is required to view the reports.
Visit the home page of Squid at http://www.squid-cache.org/. Here, find the “Squid User Guide” and a very extensive collection of FAQs on Squid.
In addition, mailing lists are available for Squid at http://www.squid-cache.org/Support/mailing-lists.html.
Mobile computing is mostly associated with laptops, PDAs and cellular phones (and the data exchange between them). Mobile hardware components, such as external hard disks, flash disks, or digital cameras, can be connected to laptops or desktop systems. A number of software components are involved in mobile computing scenarios and some applications are tailor-made for mobile use.
The hardware of laptops differs from that of a normal desktop system. This is because criteria like exchangeability, space requirements and power consumption must be taken into account. The manufacturers of mobile hardware have developed standard interfaces like Mini PCI and Mini PCIe that can be used to extend the hardware of laptops. The standards cover memory cards, network interface cards, and external hard disks.
The inclusion of energy-optimized system components during laptop manufacturing contributes to their suitability for use without access to the electrical power grid. Their contribution to conservation of power is at least as important as that of the operating system. openSUSE® Leap supports various methods that control the power consumption of a laptop and have varying effects on the operating time under battery power. The following list is in descending order of contribution to power conservation:
Throttling the CPU speed.
Switching off the display illumination during pauses.
Manually adjusting the display illumination.
Disconnecting unused, hotplug-enabled accessories (USB CD-ROM, external mouse, Wi-Fi, etc.).
Spinning down the hard disk when idling.
Detailed background information about power management in openSUSE Leap is provided in Chapter 29, Power Management.
Your system needs to adapt to changing operating environments when used for mobile computing. Many services depend on the environment and the underlying clients must be reconfigured. openSUSE Leap handles this task for you.
The services affected in the case of a laptop commuting back and forth between a small home network and an office network are:
This includes IP address assignment, name resolution, Internet connectivity and connectivity to other networks.
A current database of available printers and an available print server must be present, depending on the network.
As with printing, the list of the corresponding servers must be current.
If your laptop is temporarily connected to a projector or an external monitor, different display configurations must be available.
openSUSE Leap offers several ways of integrating laptops into existing operating environments:
NetworkManager is especially tailored for mobile networking on laptops. It provides a means to easily and automatically switch between network environments or different types of networks such as mobile broadband (such as GPRS, EDGE, or 3G), wireless LAN, and Ethernet. NetworkManager supports WEP and WPA-PSK encryption in wireless LANs. It also supports dial-up connections. The GNOME desktop includes a front-end for NetworkManager. For more information, see Section 28.3, “Configuring Network Connections”.
| My computer… | Use NetworkManager |
|---|---|
| is a laptop | Yes |
| is sometimes attached to different networks | Yes |
| provides network services (such as DNS or DHCP) | No |
| only uses a static IP address | No |
Use the YaST tools to configure networking whenever NetworkManager should not handle network configuration.
If you travel frequently with your laptop and change between different types of network connections, NetworkManager works well when all DNS addresses are assigned correctly with DHCP. If some connections use static DNS addresses, add them to the NETCONFIG_DNS_STATIC_SERVERS option in /etc/sysconfig/network/config.
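As a sketch, such an entry could look like the following (the server addresses are placeholders, not values from this manual):

```
# In /etc/sysconfig/network/config:
NETCONFIG_DNS_STATIC_SERVERS="192.0.2.53 198.51.100.53"
```

After editing the file, run netconfig update -f as root so that the change is propagated to /etc/resolv.conf.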
The service location protocol (SLP) simplifies the connection of a laptop to an existing network. Without SLP, the administrator of a laptop usually requires detailed knowledge of the services available in a network. SLP broadcasts the availability of a certain type of service to all clients in a local network. Applications that support SLP can process the information dispatched by SLP and be configured automatically. SLP can also be used to install a system, minimizing the effort of searching for a suitable installation source. Find detailed information about SLP in Chapter 17, SLP.
There are various task areas in mobile use that are covered by dedicated software: system monitoring (especially the battery charge), data synchronization, and wireless communication with peripherals and the Internet. The following sections cover the most important applications that openSUSE Leap provides for each task.
Two system monitoring tools are provided by openSUSE Leap:
is an application that lets you adjust the energy saving related behavior of the GNOME desktop. You can typically access it via › › › .
The gathers measurable system parameters into one monitoring environment. It presents the output information in three tabs by default. gives detailed information about currently running processes, such as CPU load, memory usage, or process ID number and priority. The presentation and filtering of the collected data can be customized—to add a new type of process information, left-click the process table header and choose which column to hide or add to the view. It is also possible to monitor different system parameters in various data pages or collect the data of various machines in parallel over the network. The tab shows graphs of CPU, memory and network history and the tab lists all partitions and their usage.
When switching between working on a mobile machine disconnected from the network and working at a networked workstation in an office, it is necessary to keep processed data synchronized across all instances. This could include e-mail folders, directories and individual files that need to be present for work on the road and at the office. The solution in both cases is as follows:
Use an IMAP account for storing your e-mails in the office
network. Then access the e-mails from the workstation using any
disconnected IMAP-enabled e-mail client, like Mozilla Thunderbird or
Evolution as described in GNOME User Guide. The e-mail
client must be configured so that the same folder is always accessed
for Sent messages. This ensures that all messages
are available along with their status information after the
synchronization process has completed. Use an SMTP server implemented
in the mail client for sending messages instead of the system-wide MTA
postfix or sendmail to receive reliable feedback about unsent mail.
There are several utilities suitable for synchronizing
data between a laptop and a workstation. One of the most widely used is
a command-line tool called rsync. For more
information, see its manual page (man 1 rsync).
Wi-Fi has the largest range of these wireless technologies and is the only one suitable for operating large and sometimes even spatially separated networks. Single machines can connect with each other to form an independent wireless network or access the Internet. Devices called access points act as base stations for Wi-Fi-enabled devices and as intermediaries for access to the Internet. A mobile user can switch among access points depending on location and which access point offers the best connection. As in cellular telephony, a large network is available to Wi-Fi users without binding them to a specific location for access.
Wi-Fi cards communicate using the 802.11 standard, prepared by the IEEE organization. Originally, this standard provided for a maximum transmission rate of 2 Mbit/s. Meanwhile, several supplements have been added to increase the data rate. These supplements define details such as the modulation, transmission output, and transmission rates (see Table 27.2, “Overview of Various Wi-Fi Standards”). Additionally, many companies implement hardware with proprietary or draft features.
| Name (802.11) | Frequency (GHz) | Maximum Transmission Rate (Mbit/s) | Note |
|---|---|---|---|
| a | 5 | 54 | Less interference-prone |
| b | 2.4 | 11 | Less common |
| g | 2.4 | 54 | Widespread, backward-compatible with 11b |
| n | 2.4 and/or 5 | 300 | Common |
| ac | 5 | up to ~865 | Expected to be common in 2015 |
| ad | 60 | up to approx. 7000 | Released 2012, currently less common; not supported in openSUSE Leap |
802.11 Legacy cards are not supported by openSUSE® Leap. Most cards using 802.11 a/b/g/n are supported. New cards usually comply with the 802.11n standard, but cards using 802.11g are still available.
In wireless networking, various techniques and configurations are used to ensure fast, high-quality, and secure connections. Usually your Wi-Fi card operates in managed mode. However, different operating types need different setups. Wireless networks can be classified into four network modes:
Managed networks have a managing element: the access point. In this mode (also called infrastructure or default mode), all connections of the Wi-Fi stations in the network run through the access point, which may also serve as a connection to an Ethernet. To make sure only authorized stations can connect, various authentication mechanisms (WPA, etc.) are used. Managed mode is also the most common mode and consumes the least amount of energy.
Ad-hoc networks do not have an access point. The stations communicate directly with each other, therefore an ad-hoc network is usually slower than a managed network. However, the transmission range and number of participating stations are greatly limited in ad-hoc networks. They also do not support WPA authentication. Additionally, not all cards support ad-hoc mode reliably.
In master mode, your Wi-Fi card is used as the access point, assuming your card supports this mode. Find out the details of your Wi-Fi card at http://linux-wless.passys.nl.
Wireless mesh networks are organized in a mesh topology. A wireless mesh network's connection is spread among all wireless mesh nodes. Each node belonging to this network is connected to other nodes to share the connection, possibly over a large area.
Because a wireless network is much easier to intercept and compromise than a wired network, the various standards include authentication and encryption methods.
Old Wi-Fi cards support only WEP (Wired Equivalent Privacy). However, because WEP has proven to be insecure, the Wi-Fi industry has defined an extension called WPA, which is supposed to eliminate the weaknesses of WEP. WPA, sometimes synonymous with WPA2, should be the default authentication method.
Usually the user cannot choose the authentication method. For example, when a card operates in managed mode the authentication is set by the access point. NetworkManager shows the authentication method.
There are various encryption methods to ensure that no unauthorized person can read the data packets that are exchanged in a wireless network or gain access to the network:
This standard uses the RC4 encryption algorithm, originally with a key length of 40 bits, later also with 104 bits. Often, the length is declared as 64 bits or 128 bits, depending on whether the 24 bits of the initialization vector are included. However, this standard has some weaknesses. Attacks against the keys generated by this system may be successful. Nevertheless, it is better to use WEP than not to encrypt the network.
Some vendors have implemented the non-standard “Dynamic WEP”. It works exactly as WEP and shares the same weaknesses, except that the key is periodically changed by a key management service.
This key management protocol, defined in the WPA standard, uses the same encryption algorithm as WEP but eliminates its weaknesses. Because a new key is generated for every data packet, attacks against these keys are fruitless. TKIP is used together with WPA-PSK.
CCMP describes the key management. Usually, it is used in connection with WPA-EAP, but it can also be used with WPA-PSK. The encryption takes place according to AES and is stronger than the RC4 encryption of the WEP standard.
Bluetooth has the broadest application spectrum of all wireless technologies. It can be used for communication between computers (laptops) and PDAs or cellular phones, as can IrDA. It can also be used to connect various computers within range. Bluetooth is also used to connect wireless system components, like a keyboard or a mouse. The range of this technology is, however, not sufficient to connect remote systems to a network. Wi-Fi is the technology of choice for communicating through physical obstacles like walls.
IrDA is the wireless technology with the shortest range. Both communication parties must be within viewing distance of each other. Obstacles like walls cannot be overcome. One possible application of IrDA is the transmission of a file from a laptop to a cellular phone. The short path from the laptop to the cellular phone is then covered using IrDA. Long-range transmission of the file to the recipient is handled by the mobile network. Another application of IrDA is the wireless transmission of printing jobs in the office.
Ideally, you protect data on your laptop against unauthorized access in multiple ways. Possible security measures can be taken in the following areas:
Always physically secure your system against theft whenever possible. Various securing tools (like chains) are available in retail stores.
Use biometric authentication in addition to standard authentication via login and password. openSUSE Leap supports fingerprint authentication.
Important data should not only be encrypted during transmission, but also on the hard disk. This ensures its safety in case of theft. The creation of an encrypted partition with openSUSE Leap is described in Chapter 11, Encrypting Partitions and Files. Another possibility is to create encrypted home directories when adding the user with YaST.
Encrypted partitions are not unmounted during a suspend to disk event. Thus, all data on these partitions is available to anyone who steals the hardware and resumes the system from disk.
Any transfer of data should be secured, no matter how the transfer is done. Find general security issues regarding Linux and networks in Chapter 1, Security and Confidentiality.
openSUSE Leap supports the automatic detection of mobile storage devices over FireWire (IEEE 1394) or USB. The term mobile storage device applies to any kind of FireWire or USB hard disk, flash disk, or digital camera. These devices are automatically detected and configured when they are connected with the system over the corresponding interface. The file manager of GNOME offers flexible handling of mobile hardware items. To unmount any of these media safely, use the (GNOME) feature of the file manager. For more details refer to GNOME User Guide.
When an external hard disk is correctly recognized by the system, its
icon appears in the file manager. Clicking the icon displays the contents
of the drive. It is possible to create directories and files here and
edit or delete them. To rename a hard disk, select the corresponding menu
item from the right-click contextual menu. This name change is limited to display in
the file manager. The descriptor by which the device is mounted in
/media remains unaffected.
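To verify where such a device is actually mounted, the kernel's mount table can be consulted directly from the command line; a small sketch (the /run/media variant covers newer desktop setups):

```shell
# List mounted file systems below /media or /run/media, if any.
out=$(grep -E ' /(run/)?media' /proc/mounts || echo "no removable media mounted")
echo "$out"
```

The second field of each /proc/mounts line is the mount point, so the output shows the descriptor under which the device was mounted, independent of any display name set in the file manager.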
These devices are handled by the system like external hard disks. It is similarly possible to rename the entries in the file manager.
Digital cameras recognized by the system also appear as external drives in the overview of the file manager. The images can then be processed using Shotwell. For advanced photo processing use The GIMP. For a short introduction to The GIMP, see Chapter 18, GIMP: Manipulating Graphics.
A desktop system or a laptop can communicate with a cellular phone via Bluetooth or IrDA. Some models support both protocols and some only one of the two. The usage areas of the two protocols and the corresponding extended documentation have already been mentioned in Section 27.1.3.3, “Wireless Communication: Wi-Fi”. The configuration of these protocols on the cellular phones themselves is described in their manuals.
The central point of reference for all questions regarding mobile devices and Linux is http://tuxmobil.org/. Various sections of that Web site deal with the hardware and software aspects of laptops, PDAs, cellular phones and other mobile hardware.
A similar approach to that of http://tuxmobil.org/ is taken by http://www.linux-on-laptops.com/. Information about laptops and handhelds can be found there.
SUSE maintains a mailing list in German dedicated to the subject of laptops. See http://lists.opensuse.org/opensuse-mobile-de/. On this list, users and developers discuss all aspects of mobile computing with openSUSE Leap. Postings in English are answered, but the majority of the archived information is only available in German. Use http://lists.opensuse.org/opensuse-mobile/ for English postings.
NetworkManager is the ideal solution for laptops and other portable computers. It supports state-of-the-art encryption types and standards for network connections, including connections to 802.1X protected networks. 802.1X is the “IEEE Standard for Local and Metropolitan Area Networks—Port-Based Network Access Control”. With NetworkManager, you need not worry about configuring network interfaces and switching between wired or wireless networks when you are moving. NetworkManager can automatically connect to known wireless networks or manage several network connections in parallel—the fastest connection is then used as default. Furthermore, you can manually switch between available networks and manage your network connection using an applet in the system tray.
Instead of only one connection being active, multiple connections may be active at once. This enables you to unplug your laptop from an Ethernet and remain connected via a wireless connection.
NetworkManager provides a sophisticated and intuitive user interface, which enables users to easily switch their network environment. However, NetworkManager is not a suitable solution in the following cases:
Your computer provides network services for other computers in your network, for example, it is a DHCP or DNS server.
Your computer is a Xen server or your system is a virtual system inside Xen.
On laptop computers, NetworkManager is enabled by default. However, it can be enabled or disabled at any time in the YaST Network Settings module.
Run YaST and go to › .
The dialog opens. Go to the tab.
To configure and manage your network connections with NetworkManager:
In the field, select .
Click and close YaST.
Configure your network connections with NetworkManager as described in Section 28.3, “Configuring Network Connections”.
To deactivate NetworkManager and control the network with your own configuration:
In the field, choose .
Click .
Set up your network card with YaST using automatic configuration via DHCP or a static IP address.
Find a detailed description of the network configuration with YaST in Section 13.4, “Configuring a Network Connection with YaST”.
After having enabled NetworkManager in YaST, configure your network connections with the NetworkManager front-end available in GNOME. It shows tabs for all types of network connections, such as wired, wireless, mobile broadband, DSL, and VPN connections.
To open the network configuration dialog in GNOME, open the settings menu via the status menu and click the entry.
Depending on your system setup, you may not be allowed to configure
connections. In a secured environment, some options may be locked or
require root permission. Ask your system administrator for details.
Open the NetworkManager configuration dialog.
To add a Connection:
Click the icon in the lower left corner.
Select your preferred connection type and follow the instructions.
When you are finished click .
After having confirmed your changes, the newly configured network connection appears in the list of available networks you get by opening the Status Menu.
To edit a connection:
Select the entry to edit.
Click the gear icon to open the dialog.
Insert your changes and click to save them.
To make your connection available as a system connection, go to the tab and set the check box . For more information about user and system connections, see Section 28.4.1, “User and System Connections”.
If your computer is connected to a wired network, use the NetworkManager applet to manage the connection.
Open the Status Menu and click to change the connection details or to switch it off.
To change the settings click and then click the gear icon.
To switch off all network connections, activate the setting.
Visible wireless networks are listed in the GNOME NetworkManager applet menu under . The signal strength of each network is also shown in the menu. Encrypted wireless networks are marked with a shield icon.
To connect to a visible wireless network, open the Status Menu and click .
Click to enable it.
Click , select your Wi-Fi Network and click .
If the network is encrypted, a configuration dialog opens. It shows the type of encryption the network uses and text boxes for entering the login credentials.
To connect to a network that does not broadcast its service set identifier (SSID or ESSID) and therefore cannot be detected automatically, open the Status Menu and click .
Click to open the detailed settings menu.
Make sure your Wi-Fi is enabled and click .
In the dialog that opens, enter the SSID or ESSID in and set encryption parameters if necessary.
A wireless network that has been chosen explicitly will remain connected as long as possible. If a network cable is plugged in during that time, any connections that have been set to will be connected, while the wireless connection remains up.
If your Wi-Fi/Bluetooth card supports access point mode, you can use NetworkManager for the configuration.
Open the Status Menu and click .
Click to open the detailed settings menu.
Click and follow the instructions.
Use the credentials shown in the resulting dialog to connect to the hotspot from a remote machine.
NetworkManager supports several Virtual Private Network (VPN) technologies. For each technology, openSUSE Leap comes with a base package providing the generic support for NetworkManager. In addition to that, you also need to install the respective desktop-specific package for your applet.
To use this VPN technology, install:
NetworkManager-openvpn
NetworkManager-openvpn-gnome
To use this VPN technology, install:
NetworkManager-openconnect
NetworkManager-openconnect-gnome
To use this VPN technology, install:
NetworkManager-pptp
NetworkManager-pptp-gnome
The following procedure describes how to set up your computer as an OpenVPN client using NetworkManager. Setting up other types of VPNs works analogously.
Before you begin, make sure that the package
NetworkManager-openvpn-gnome is
installed and all dependencies have been resolved.
Open the application by clicking the status icons at the right end of the panel and clicking the icon. In the window , choose .
Click the icon.
Select and then .
Choose the type. Depending on the setup of your OpenVPN server, choose or .
Insert the necessary values into the respective text boxes. For our example configuration, these are:
|  |  |
|---|---|
|  | The remote endpoint of the VPN server |
|  | The user (only available when you have selected ) |
|  | The password for the user (only available when you have selected ) |
Finish the configuration with .
To enable the connection, in the panel of the application click the switch button. Alternatively, click the status icons at the right end of the panel, click the name of your VPN and then .
NetworkManager distinguishes two types of wireless connections, trusted and untrusted. A trusted connection is any network that you explicitly selected in the past. All others are untrusted. Trusted connections are identified by the name and MAC address of the access point. Using the MAC address ensures that you cannot use a different access point with the name of your trusted connection.
NetworkManager periodically scans for available wireless networks. If multiple trusted networks are found, the most recently used one is automatically selected. If all networks are untrusted, NetworkManager waits for your selection.
If the encryption setting changes but the name and MAC address remain the same, NetworkManager attempts to connect, but first you are asked to confirm the new encryption settings and provide any updates, such as a new key.
If you switch from using a wireless connection to offline mode, NetworkManager blanks the SSID or ESSID. This ensures that the card is disconnected.
NetworkManager knows two types of connections: user and
system connections. User connections are connections
that become available to NetworkManager when the first user logs in. Any required
credentials are asked from the user and when the user logs out, the
connections are disconnected and removed from NetworkManager. Connections that are
defined as system connection can be shared by all users and are made
available right after NetworkManager is started—before any users log in. In
case of system connections, all credentials must be provided at the time
the connection is created. Such system connections can be used to
automatically connect to networks that require authorization. For
information on how to configure user or system connections with NetworkManager, refer to
Section 28.3, “Configuring Network Connections”.
If you do not want to re-enter your credentials each time you want to connect to an encrypted network, you can use the GNOME Keyring Manager to store your credentials encrypted on the disk, secured by a master password.
NetworkManager can also retrieve its certificates for secure connections (for example, encrypted wired, wireless or VPN connections) from the certificate store. For more information, refer to Chapter 12, Certificate Store.
In the following, find some frequently asked questions about configuring special network options with NetworkManager.
By default, connections in NetworkManager are device type-specific: they apply to all physical devices with the same type. If more than one physical device per connection type is available (for example, your machine is equipped with two Ethernet cards), you can tie a connection to a certain device.
To do this in GNOME, first look up the MAC address of your device (use
the available from the applet,
or use the output of command line tools like nm-tool
or wicked show all). Then start the dialog for
configuring network connections and choose the connection you want to
modify. On the or
tab, enter the of the device and confirm
your changes.
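A quick way to look up those MAC addresses without any applet is the kernel's sysfs tree; a sketch:

```shell
# Print each network device together with its hardware (MAC) address.
# nmcli or "wicked show all" report the same information on openSUSE Leap.
for dev in /sys/class/net/*; do
  printf '%s %s\n' "${dev##*/}" "$(cat "$dev/address")"
done
```

The loopback device lo always appears with an all-zero address; physical Ethernet and wireless devices show their real MAC addresses.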
When multiple access points with different wireless bands (a/b/g/n) are available, the access point with the strongest signal is automatically chosen by default. To override this, use the field when configuring wireless connections.
The Basic Service Set Identifier (BSSID) uniquely identifies each Basic Service Set. In an infrastructure Basic Service Set, the BSSID is the MAC address of the wireless access point. In an independent (ad-hoc) Basic Service Set, the BSSID is a locally administered MAC address generated from a 46-bit random number.
Start the dialog for configuring network connections as described in Section 28.3, “Configuring Network Connections”. Choose the wireless connection you want to modify and click . On the tab, enter the BSSID.
The primary device (the device which is connected to the Internet) does not need any special configuration. However, you need to configure the device that is connected to the local hub or machine as follows:
Start the dialog for configuring network connections as described in Section 28.3, “Configuring Network Connections”. Choose the connection you want to modify and click . Switch to the tab and from the drop-down box, activate . That will enable IP traffic forwarding and run a DHCP server on the device. Confirm your changes in NetworkManager.
As the DHCP server uses port 67, make sure that it
is not blocked by the firewall: On the machine sharing the connections,
start YaST and select › . Switch to
the category. If is not already shown as , select from
and click .
Confirm your changes in YaST.
In case a DHCP server provides invalid DNS information (and/or routes), you can override it. Start the dialog for configuring network connections as described in Section 28.3, “Configuring Network Connections”. Choose the connection you want to modify and click . Switch to the tab, and from the drop-down box, activate . Enter the DNS information in the and fields. To click and activate the respective check box. Confirm your changes.
Define a system connection that can be used for such
purposes. For more information, refer to
Section 28.4.1, “User and System Connections”.
Connection problems can occur. Some common problems related to NetworkManager include the applet not starting or a missing VPN option. Methods for resolving and preventing these problems depend on the tool used.
The applet starts automatically if the network is set up for NetworkManager control. If the applet does not start, check whether NetworkManager is enabled in YaST as described in Section 28.2, “Enabling or Disabling NetworkManager”. Then make sure that the NetworkManager-gnome package is also installed.
If the desktop applet is installed but is not running for some reason, start it manually with the command nm-applet.
Support for NetworkManager, applets, and VPN for NetworkManager is distributed in separate packages. If your NetworkManager applet does not include the VPN option, check if the packages with NetworkManager support for your VPN technology are installed. For more information, see Section 28.3.4, “NetworkManager and VPN”.
If you have configured your network connection correctly and all other
components for the network connection (router, etc.) are also up and
running, it sometimes helps to restart the network interfaces on your
computer. To do so, log in to a command line as root and run
systemctl restart wicked.
More information about NetworkManager can be found on the following Web sites and directories:
Also check out the information in the following directories for the latest information about NetworkManager and the GNOME applet:
/usr/share/doc/packages/NetworkManager/,
/usr/share/doc/packages/NetworkManager-gnome/.
Power management is especially important on laptop computers, but is also useful on other systems. ACPI (Advanced Configuration and Power Interface) is available on all modern computers (laptops, desktops, and servers). Power management technologies require suitable hardware and BIOS routines. Most laptops and many modern desktops and servers meet these requirements. It is also possible to control CPU frequency scaling to save power or decrease noise.
Power saving functions are not only significant for the mobile use of laptops, but also for desktop systems. The main functions and their use in ACPI are:
not supported.
This mode writes the entire system state to the RAM. Subsequently, the
entire system except the RAM is put to sleep. In this state, the computer
consumes very little power. The advantage of this state is the
possibility of resuming work at the same point within a few seconds
without having to boot and restart applications. This function
corresponds to the ACPI state S3.
In this operating mode, the entire system state is written to the hard
disk and the system is powered off. There must be a swap partition at
least as big as the RAM to write all the active data. Reactivation from
this state takes about 30 to 90 seconds. The state prior to the suspend
is restored. Some manufacturers offer useful hybrid variants of this
mode, such as RediSafe in IBM Thinkpads. The corresponding ACPI state is
S4. In Linux, suspend to disk is performed by kernel
routines that are independent from ACPI.
Do not reformat existing swap partitions with mkswap
if possible. Reformatting with mkswap will change
the UUID value of the swap partition. Either reformat via YaST (which will
update /etc/fstab) or adjust
/etc/fstab manually.
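The swap-size requirement mentioned above can be checked quickly from /proc/meminfo; a sketch:

```shell
# Compare total swap against total RAM (both reported in kB by the kernel).
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "$swap_kb" -ge "$ram_kb" ]; then
  echo "swap is at least as large as RAM; suspend to disk is possible"
else
  echo "swap ($swap_kb kB) is smaller than RAM ($ram_kb kB)"
fi
```

Note that this is a rule of thumb: heavily used systems may need slightly more swap than RAM to write out all active data.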
ACPI checks the battery charge status and provides information about it. Additionally, it coordinates actions to perform when a critical charge status is reached.
Following a shutdown, the computer is powered off. This is especially important when an automatic shutdown is performed shortly before the battery is empty.
In connection with the CPU, energy can be saved in three different ways: frequency and voltage scaling (also known as PowerNow! or Speedstep), throttling and putting the processor to sleep (C-states). Depending on the operating mode of the computer, these methods can also be combined.
ACPI was designed to enable the operating system to set up and control the individual hardware components. ACPI supersedes both Power Management Plug and Play (PnP) and Advanced Power Management (APM). It delivers information about the battery, AC adapter, temperature, fan and system events, like “close lid” or “battery low.”
The BIOS provides tables containing information about the individual
components and hardware access methods. The operating system uses this
information for tasks like assigning interrupts or activating and
deactivating components. Because the operating system executes commands
stored in the BIOS, the functionality depends on the BIOS implementation.
The tables ACPI can detect and load are reported in journald. See
Chapter 11, journalctl: Query the systemd Journal for more information on viewing the journal
log messages. See Section 29.2.2, “Troubleshooting” for more information
about troubleshooting ACPI problems.
The CPU can save energy in three ways:
Frequency and Voltage Scaling
Throttling the Clock Frequency (T-states)
Putting the Processor to Sleep (C-states)
Depending on the operating mode of the computer, these methods can be combined. Saving energy also means that the system heats up less and the fans are activated less frequently.
Frequency scaling and throttling are only relevant if the processor is busy, because the most economic C-state is applied anyway when the processor is idle. If the CPU is busy, frequency scaling is the recommended power saving method. Often the processor only works with a partial load. In this case, it can be run with a lower frequency. Usually, dynamic frequency scaling controlled by the kernel on-demand governor is the best approach.
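The governor currently in use can be inspected through sysfs; a sketch (the cpufreq path exists only on systems with frequency scaling support, so the example guards for that):

```shell
# Show the frequency scaling governor of the first CPU, if available.
gov_file=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
if [ -r "$gov_file" ]; then
  cat "$gov_file"
else
  echo "no cpufreq support on this system"
fi
```

On systems with cpufreq support this typically prints a governor name such as ondemand, powersave, or performance.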
Throttling should be used only as a last resort, for example, to extend battery life despite a high system load. However, some systems do not run smoothly when they are throttled too much. Moreover, CPU throttling does not make sense if the CPU has little to do.
For in-depth information, refer to Chapter 11, Power Management.
There are two different types of problems. On one hand, the ACPI code of the kernel may contain bugs that were not detected in time. In this case, a solution will be made available for download. More often, the problems are caused by the BIOS. Sometimes, deviations from the ACPI specification are purposely integrated in the BIOS to circumvent errors in the ACPI implementation of other widespread operating systems. Hardware components that have serious errors in the ACPI implementation are recorded in a blacklist that prevents the Linux kernel from using ACPI for these components.
The first thing to do when problems are encountered is to update the BIOS. If the computer does not boot, one of the following boot parameters may be helpful:
pci=noacpi: Do not use ACPI for configuring the PCI devices.
acpi=ht: Only perform a simple resource configuration. Do not use ACPI for other purposes.
acpi=off: Disable ACPI.
Some newer machines (especially SMP systems and AMD64 systems) need ACPI for configuring the hardware correctly. On these machines, disabling ACPI can cause problems.
Sometimes, the machine is confused by hardware that is attached over USB or FireWire. If a machine refuses to boot, unplug all unneeded hardware and try again.
After booting, monitor the boot messages of the system with the command dmesg
-T | grep -2i acpi (or all messages, because the
problem may not be caused by ACPI). If an error occurs while
parsing an ACPI table, the most important table—the DSDT
(Differentiated System Description Table)—can be
replaced with an improved version. In this case, the faulty DSDT of the
BIOS is ignored. The procedure is described in
Section 29.4, “Troubleshooting”.
In the kernel configuration, there is a switch for activating ACPI debug messages. If a kernel with ACPI debugging is compiled and installed, detailed information is issued.
If you experience BIOS or hardware problems, it is always advisable to contact the manufacturers. Even if they do not always provide assistance for Linux, it is important to confront them with the problems: manufacturers will only take the issue seriously if they realize that an adequate number of their customers use Linux.
http://tldp.org/HOWTO/ACPI-HOWTO/ (detailed ACPI HOWTO, contains DSDT patches)
http://www.acpi.info (Advanced Configuration & Power Interface Specification)
http://acpi.sourceforge.net/dsdt/index.php (DSDT patches by Bruno Ducrot)
In Linux, the hard disk can be put to sleep entirely if it is not needed, or
it can be run in a more economical or quieter mode. On modern laptops, you do
not need to switch off the hard disks manually, because they automatically
enter an economical operating mode whenever they are not needed. However, if
you want to maximize power savings, test some of the following methods,
using the hdparm command.
It can be used to modify various hard disk settings. The option
-y instantly switches the hard disk to the standby mode.
-Y puts it to sleep. hdparm
-S X causes the hard disk to be
spun down after a certain period of inactivity. Replace
X as follows: 0 disables this
mechanism, causing the hard disk to run continuously. Values from
1 to 240 are multiplied by 5
seconds. Values from 241 to 251
correspond to 1 to 11 times 30 minutes.
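The encoding of the -S value can be expressed as a small helper. This is an illustrative sketch of the mapping described above; the function name is ours, not part of hdparm:

```python
def hdparm_standby_timeout(value: int) -> int:
    """Return the spin-down timeout in seconds for an hdparm -S value.

    Mapping as described above: 0 disables the mechanism, values from
    1 to 240 are multiplied by 5 seconds, and values from 241 to 251
    correspond to 1 to 11 times 30 minutes.
    """
    if value == 0:
        return 0  # disabled: the hard disk runs continuously
    if 1 <= value <= 240:
        return value * 5  # multiples of 5 seconds
    if 241 <= value <= 251:
        return (value - 240) * 30 * 60  # 1 to 11 times 30 minutes
    raise ValueError("hdparm -S accepts values from 0 to 251")

# For example, hdparm -S 120 spins the disk down after 10 minutes:
print(hdparm_standby_timeout(120))  # → 600
```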
Internal power saving options of the hard disk can be controlled with the
option -B. Select a value from 0 (maximum power saving) to
255 (maximum throughput). The result
depends on the hard disk used and is difficult to assess. To make a hard
disk quieter, use the option -M. Select a value from
128 (quiet) to 254 (fast).
Often, it is not so easy to put the hard disk to sleep. In Linux, numerous
processes write to the hard disk, waking it up repeatedly. Therefore, it is
important to understand how Linux handles data that needs to be written to
the hard disk. First, all data is buffered in the RAM. This buffer is
monitored by the pdflush daemon.
When the data reaches a certain age limit or when the buffer is filled to a
certain degree, the buffer content is flushed to the hard disk. The buffer
size is dynamic and depends on the size of the memory and the system load.
By default, pdflush is set to short intervals to achieve maximum data
integrity. It checks the buffer every 5 seconds and writes the data to the
hard disk. The following variables are interesting:
/proc/sys/vm/dirty_writeback_centisecs
Contains the delay until a pdflush thread wakes up (in hundredths of a second).
/proc/sys/vm/dirty_expire_centisecs
Defines the maximum age after which a dirty page must be written out.
The default is 3000, which means 30 seconds.
/proc/sys/vm/dirty_background_ratio
Percentage of dirty pages at which pdflush begins to write them out in the background.
The default is 5%.
/proc/sys/vm/dirty_ratio
When dirty pages exceed this percentage of the total memory, processes are forced to write dirty buffers to disk during their time slice instead of continuing to write.
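The current values of these variables can be inspected with a short script. This is a minimal sketch assuming a Linux system where the /proc/sys/vm files listed above exist; the function name is ours, not part of any tool:

```python
from pathlib import Path

def read_vm_settings(base="/proc/sys/vm"):
    """Read the writeback-related kernel settings described above.

    Returns a dict mapping setting name to its integer value. Settings
    whose files do not exist (e.g. on non-Linux systems) are skipped.
    """
    names = (
        "dirty_writeback_centisecs",
        "dirty_expire_centisecs",
        "dirty_background_ratio",
        "dirty_ratio",
    )
    settings = {}
    for name in names:
        path = Path(base) / name
        if path.exists():
            settings[name] = int(path.read_text())
    return settings

for name, value in read_vm_settings().items():
    print(f"{name} = {value}")
```

Writing to these files (as root) changes the corresponding setting until the next reboot; as noted below, such changes should be made with care.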
Changes to the pdflush daemon
settings can endanger data integrity.
Apart from these processes, journaling file systems, like
Btrfs,
Ext3,
Ext4 and others write their
metadata independently from pdflush,
which also prevents the hard disk from spinning down.
To avoid this, a special kernel extension has been
developed for mobile devices. To use the extension, install the
laptop-mode-tools package and
see
/usr/src/linux/Documentation/laptops/laptop-mode.txt
for details.
Another important factor is the way active programs behave. For example, good editors regularly write hidden backups of the currently modified file to the hard disk, causing the disk to wake up. Features like this can be disabled at the expense of data integrity.
In this context, the postfix mail daemon uses the variable
POSTFIX_LAPTOP. If this variable is set to
yes, postfix accesses the hard disk far less frequently.
In openSUSE Leap these technologies are controlled by
laptop-mode-tools.
All error messages and alerts are logged in the system journal that can be
queried with the command journalctl (see
Chapter 11, journalctl: Query the systemd Journal for more information). The following
sections cover the most common problems.
Refer to the kernel sources to see if your processor is supported. You may
need a special kernel module or module option to activate CPU frequency
control. If the kernel-source
package is installed, this information is available in
/usr/src/linux/Documentation/cpu-freq/*.
http://en.opensuse.org/SDB:Suspend_to_RAM (How to get Suspend to RAM working)
http://old-en.opensuse.org/Pm-utils (How to modify the general suspend framework)
This example network is used across all network-related chapters of the openSUSE® Leap documentation.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Copyright © 2006– 2018 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
This manual introduces you to the GNOME graphical desktop environment as implemented in openSUSE® Leap, and shows you how to configure it to meet your personal needs and preferences. It also introduces you to several programs and services. It is intended for users who have experience using a graphical desktop environment such as macOS*, Windows*, or other Linux desktops.
The manual is divided into the following parts:
Get to know your GNOME desktop, learn how to cope with basic and daily tasks using the central GNOME applications and various small utilities. Get an overview of the possibilities that GNOME offers for modifying and individualizing the desktop according to your needs and wishes. Learn how to use assistive technologies to improve accessibility in case of vision or mobility impairment.
Learn how to manage and exchange data on your system or on a network: connecting to a network and sharing files, managing printers, or creating backups of your data. This part also shows how to sign and encrypt your mails and documents and how to use file transfer clients to transfer data from or to the Internet.
Introduces the LibreOffice suite, including Writer, Calc, Impress, Base, Draw, and Math.
Use a Web browser and get to know the e-mailing and calendaring software. Communicate with others using Instant Messaging or Voice over IP.
Get to know GIMP, an image manipulation program that meets the needs of both amateurs and professionals. Get introduced to your desktop's applications for playing movies. Learn how to create data or audio CDs and DVDs for archiving your data.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation
is usually available in your installed system under
/usr/share/doc/manual.
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems, using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/, log in, and click .
For feedback on the documentation of this product, you can also send a
mail to doc-team@suse.com. Make sure to include the
document title, the product version and the publication date of the
documentation. To report errors or suggest enhancements, provide a concise
description of the problem and refer to the respective section number and
page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and
parameters
user: users or groups
package name: name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
, › : menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.
root # command
tux > sudo command
Commands that can be run by non-privileged users.
tux > command
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
This section describes the conventions, layout, and common tasks of the GNOME desktop as implemented in your product.
In this chapter you will learn how to work with files and burn CDs. You will also find out how to perform regular tasks with your desktop.
You can change the way the GNOME desktop looks and behaves to suit your own personal tastes and needs. Some possible changes of settings are:
The GNOME desktop includes assistive technologies to support users with various impairments and special needs, and to interact with common assistive devices. This chapter describes several assistive technology applications designed to meet the needs of users with physical disabilities like low vision or impaired motor skills.
This section describes the conventions, layout, and common tasks of the GNOME desktop as implemented in your product.
GNOME is an easy-to-use graphical interface that can be customized to meet your needs and personal preferences. This section describes the default configuration of GNOME. If you or your system administrator modify the defaults, some aspects might be different, such as appearance or key combinations.
openSUSE Leap ships with three different session configurations based on GNOME: GNOME, GNOME Classic, and SLE Classic. The version described here is SLE Classic. The main difference between the configurations is the look and feel of the home screen and the main menu. The majority of what is described in the following applies to all three configurations.
In general, all users must authenticate—unless Auto Login is enabled for a specific user. In this case, that user is logged in automatically when the system starts. This can save some time, especially if a computer is used by a single person, but it may impact account security. Auto Login can be enabled or disabled during installation or at any time using the YaST User and Group Management module. For more information, refer to Chapter 5, Managing Users with YaST.
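Behind the scenes, the choice made in YaST typically ends up in a sysconfig file. The following excerpt is a sketch of the relevant entry on openSUSE; the user name tux is a placeholder, and an empty value disables automatic login:

```shell
# /etc/sysconfig/displaymanager (excerpt; "tux" is a placeholder user name)
DISPLAYMANAGER_AUTOLOGIN="tux"
```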
If your computer is running in a network environment and you are not the only person using the machine, you are usually prompted to enter your user name and password when you start the system.
If your user name is listed, click it.
If your user name is not listed, click . Then enter your user name and click .
Enter your password and click .
If you want to try one of the additional GNOME session configurations or try another desktop environment, follow the steps below.
On the login screen, click your user name or enter it, as you normally would.
To change the session type, click the cog wheel icon. A menu appears.
From the menu, select one of the entries. Depending on your configuration there may be different choices, but the default selection is as follows.
A GNOME 3 configuration that is very close to the upstream design. It focuses on interrupting users as little as possible. However, starting applications and switching between them works differently from many other desktop operating systems. It uses a single panel at the top of the screen.
A GNOME 3 configuration that is designed to appeal to former users of GNOME 2. The desktop has two panels, one at the top and another at the bottom.
A very basic desktop designed to use little resources. It can be used as a fallback, if other options do not work or are slow.
The default desktop of SUSE Linux Enterprise, designed to appeal to users of older versions of SUSE Linux Enterprise and users of Microsoft* Windows*. This desktop is a GNOME 3 configuration and uses a single panel that is placed at the bottom of the screen.
Enter your password into the text box, then click .
After switching to another session type once, the chosen session will become your default session. To switch back, repeat the steps above.
In the top right corner, there are status icons and the assistive technologies menu. Clicking the status icons opens a menu that allows you to set the sound volume and to restart or power off the machine.
The GNOME desktop appears after you first log in. It displays a panel at the bottom showing the following elements (from left to right):
Click in the left corner to open a menu with all the installed programs. These are classified under different categories for a better overview. Sub-items open automatically when you place the mouse above them.
Click in the bottom part of the menu to open Activities Overview where you can start programs and manage those already running.
The Activities Overview is described further in Section 1.2.1, “Activities Overview”.
Click to open a menu with shortcuts to your personal directories, connected storage media, and network resources.
All applications currently open on the desktop (on the active workspace) appear in the middle part of the panel. You can bring these applications to the foreground by clicking their names.
When there are notifications, for example, for new chat or e-mail messages or concerning system updates, an indicator will appear. The indicator is a blue circle with the number of available notifications displayed in the middle. Click the indicator to open the Message Tray where you can interact with all the notifications.
This menu lets you select a workspace (also called a virtual desktop) to work on. This feature can help you work with many windows. For example, you could move windows needed for one project to workspace 1 and windows needed for another project to workspace 2.
The current day of the week and the time are shown to the right of the workspace switcher. Click it to open a menu where you can access a calendar and adjust date and time settings.
In the right corner of the panel, icons showing the current status of the network connection, sound volume and power/battery status are displayed.
Click the icons to open a menu where you can adjust sound volume, display brightness, network connection, and power settings. Click the name to display the options for logging out or for switching to another user.
The three icons in the lower part of the menu allow you to, from left to right, open the GNOME settings dialog, lock the screen, and power off or restart your computer.
Activities Overview is a full screen mode that comprises all the ways in which you can switch from one activity to another. It shows previews of all open windows and icons for favorite and running applications. It also integrates searching and browsing functionality.
There are multiple ways to open the Activities Overview:
Open the menu on the bottom panel and select .
Press Meta.
Move the pointer to the top left corner of the screen (the so-called hot corner).
In the following, the most important parts of the Activities Overview are explained.
The Dash is the bar positioned on the center left. It contains favorite applications and all applications with open windows. If you move the mouse pointer over one of the icons, GNOME will display the name of the corresponding application nearby. A light glow indicates that the application is running and has at least one open window.
Right-clicking an icon opens a menu which offers different actions depending on the associated program. Using , you can place the application icon permanently in Dash. To remove a program icon from Dash, select . To rearrange an icon, use the mouse to drag it to a new position.
On the top, there is a search box that you can use to find applications, settings and files in your home directory.
To search, you do not need to click the search box. You can begin typing directly after opening the Activities Overview. Search starts immediately; you do not need to press Enter.
On the right, there is an overview of available workspaces. To switch to a workspace, click its preview.
To move a window from one workspace to another, drag a window preview from one workspace preview to another.
To start a program, you have several options:
In the bottom panel, click and select the desired program from the hierarchical menu.
Open the Activities Overview by pressing Meta. Now click an application icon or search for an application. If you do not know the exact application name, you can search for generic category names such as “image editor”.
Further information about the activities overview can be found in Section 1.2.1, “Activities Overview”.
If you know the exact command to start the program, you can press Alt–F2, enter the command into the dialog and press Enter.
Note that the only button displayed in the window is labeled and will indeed close the window.
When you have finished using the computer, there are multiple ways to finish the session. Which one is right in a given situation depends on how long you will be away and whether you are worried about energy consumption, among other things.
Locking the Computer. Pause your session, but keep the computer on. Make sure that nobody can look at or change your work while you are away on a break. Other users can log in and work in the meantime. Other users can shut down the computer, but a prompt will warn them that you are still logged in.
Logging Out. Finish the current session, but leave the computer on, so other users can log in.
Shutting Down. Finish the current session and turn off the computer.
Restarting. Finish the current session and restart the computer. Restarting is necessary to apply some system updates.
Suspending the Computer. Pause your session and put the computer in a state where it consumes a minimal amount of energy. Suspend mode can be configured to lock your screen, so nobody can look at or change your work. Waking up the computer is generally much quicker than a full computer start.
This mode is also known as suspend-to-RAM, sleep or standby mode.
To lock the screen, click the status icons on the right of the main panel and click the padlock icon.
When you lock your screen, at first a curtain with a clock will appear. After some time the screen turns black. To unlock the screen, move the mouse or press a key to display the locked screen dialog. Enter your password, then press Enter to unlock the screen.
Click the status icons on the right of the main panel to open the menu.
Click your user name.
Select one of the following options:
Logs you out of the current session and returns you to the Login screen.
Suspends your session, allowing another user to log in and use the computer.
Takes you to the user settings where you can change your password.
Click the status icons on the right of the main panel to open the menu.
Click the power off icon in the lower right part of the menu.
Select one of the following options:
Logs you out of the current session, then turns off the computer.
Logs you out of the current session, then restarts the computer.
Click the status icons on the right of the main panel to open the menu.
Hold Alt pressed. The power off icon in the lower right part of the menu turns into a pause icon. Click the pause icon.
In this chapter you will learn how to work with files and burn CDs. You will also find out how to perform regular tasks with your desktop.
You can open GNOME Files in multiple ways:
Click › › .
Open the Activities Overview and search for files.
On the desktop, double-click .
Open the menu and select any entry, such as .
The elements of the GNOME Files window include the following:
The toolbar contains back and forward buttons, the path bar, a search function, elements to let you change the layout of the content area, and the application menu.
The menu is the last icon on the toolbar. It lets you perform many tasks, such as opening the preferences dialog, creating a new directory or opening a new window or tab.
The sidebar lets you navigate between often-used directories and external or network storage devices. To display or hide the sidebar, press F9.
Displays files and directories.
Use the icons in the top right part of the window to switch between list and grid icon view.
Open a context menu by right-clicking inside the content area. The items in this menu depend on where you right-click.
For example, if you right-click a file or directory, you can select items related to the file or directory. If you right-click the background of a content area, you can select items related to the display of items in the content area.
The floating statusbar appears when a file is selected. It displays the file name and size.
The following table lists a selection of key combinations of GNOME Files.
| Key Combination | Description |
|---|---|
| Alt–←/Alt–→ | Go backward/go forward. |
| Alt–↑ | Open the parent directory. |
| ←, →, ↑, ↓ | Select an item. |
| Alt–↓ or Enter | Open an item. |
| Alt–Enter | Open an item's dialog. |
| Shift–Alt–↓ | Open an item and close the current directory. |
| Ctrl–L | Transform the path bar from a button view to a text box. Exit this mode by pressing Enter (go to the location) or Esc (to remain in the current directory). |
| / | Transform the path bar from a button view to a text box and replace the current path with /. |
| Alt–Home | Open your home directory. |
| Any number or letter key | Start a search within the current directory and its subdirectories. The character you pressed is used as the first character of the search term. Search happens as you type; you do not need to press Enter. |
| Ctrl–T | Open a new tab. |
| Del | Moves the selected file or directory to the trash, from which it can be restored with . |
Sometimes, it is useful to archive or compress files, for example:
You want to attach an entire directory, including its subdirectories, to an e-mail.
You want to attach a large file to an e-mail.
You want to save space on your hard disk and have files you rarely use.
In all these cases, you can create a compressed file, such as a ZIP file, which can contain multiple original files. How much smaller the compressed version is than the original depends on the file type. Many video, image and office document formats are already compressed and will only become marginally smaller.
In the GNOME Files content area, right-click the directory you want to archive, then click .
Accept the default archive file name or provide a new one.
Select a file extension from the drop-down box.
.zip files are supported on most operating
systems, including Windows*.
.tar.gz files are compatible with most Linux* and
Unix* systems.
.7z files usually offer better compression ratios
than other formats, but are not as widely supported.
Specify a location for the archive file, then click .
To extract an archived file, right-click the file, then select . You can also double-click the compressed file to open it and see which files are included.
For more information on compressed files, see Section 2.10, “Creating, Displaying, and Decompressing Archives”.
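The same archives can be created and unpacked on the command line. The following sketch uses tar, which ships with every openSUSE installation; all file and directory names are placeholders:

```shell
# Create a directory with a sample file (names are placeholders)
mkdir -p demo/sub
echo "hello" > demo/sub/file.txt

# Pack the directory into a gzip-compressed archive
tar czf demo.tar.gz demo

# List the archive contents without extracting anything
tar tzf demo.tar.gz

# Unpack into a separate directory
mkdir -p extracted
tar xzf demo.tar.gz -C extracted
cat extracted/demo/sub/file.txt
```

For ZIP archives, `zip -r archive.zip demo` and `unzip archive.zip` work analogously, provided the zip and unzip packages are installed.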
If your system has a CD or DVD writer, you can use GNOME Files to burn CDs and DVDs. If you want to burn an audio CD or need more control over the result, see Chapter 20, Brasero: Burning CDs and DVDs.
Open GNOME Files.
Insert a blank medium.
Find the files you want to add to the medium and drag them to the sidebar
item called . (The label may read
slightly differently, depending on the type of medium you inserted.) When
your mouse pointer is over the sidebar item, a small +
should appear next to the pointer.
When you have dragged all files onto the sidebar item , click it.
Provide a name next to or keep the proposal.
Click .
In the dialog that appears, make sure the right medium is selected. Then click .
The files are burned to the disc. This can take a few minutes, depending on the amount of data being burned and the speed of your burner.
After the medium has been burned, it will be ejected from the drive. In the window , you can click .
To burn an ISO disc image, first insert a medium, then double-click the ISO file in GNOME Files. In the dialog , click .
Use the bookmarks feature in GNOME Files to quickly jump to your favorite directories from the sidebar.
Switch to the directory for which you want to create a bookmark in the content area.
Click the list icon, then select from the menu.
The bookmark now appears in the sidebar, with the directory name as the bookmark name.
(Optional) If you want, you can change the name of the bookmark. This does not affect the name of the bookmarked directory itself. To change the name, right-click the new sidebar item and select .
(Optional) If you want, you can change the order in which the bookmarks are displayed. To reorder, click a bookmark and drag it to the desired location.
To switch to a bookmarked directory, click the appropriate sidebar item.
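Under the hood, GNOME Files stores these bookmarks as plain text, one URI per line, optionally followed by a custom label. Assuming the default GTK location (~/.config/gtk-3.0/bookmarks), the file might look like the following sketch; the paths and the label are placeholders:

```
file:///home/tux/Projects
file:///home/tux/Music My Tunes
```

Editing this file by hand has the same effect as adding or renaming bookmarks in the sidebar.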
You can use GNOME Files to access files on remote servers. For more information, see Chapter 5, Accessing Network Resources.
To access CDs/DVDs or flash disks, insert or attach the medium. An icon for the medium is automatically created on the desktop. For many types of removable media, a GNOME Files window pops up automatically. If GNOME Files does not open, double-click the icon for that drive on the desktop to view the contents. In GNOME Files, you will see an item for the medium in the sidebar.
Do not physically remove flash disks immediately after using them. Even when the system does not indicate that data is being written, the drive may not be finished with a previous operation.
In the sidebar of GNOME Files, click the Eject icon next to the medium to safely remove or unmount the drive.
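On the command line, the equivalent of safe removal is flushing pending writes and then unmounting. A minimal sketch; the mount point is a placeholder and therefore commented out:

```shell
# Force all pending writes to be flushed to disk
sync

# Then unmount the medium by its mount point (placeholder path)
# umount /run/media/tux/USBSTICK
```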
There are multiple ways to search for files or directories. In all cases, the search will be performed on file and directory names. Searching by file size, modification date and other properties is only partially possible in the preinstalled graphical tools. Such searches are easier to do on the command line.
In GNOME Files, navigate to the directory from which you want to start the search. Then start typing the search term. To search for objects with a certain modification date or file type, click the arrow-down icon of the search box and modify the properties.
Open the Activities Overview by pressing Meta. Then start typing the search term. The search will be performed within your home directory.
Click › › . Enter the search term in the text box . The search will be performed within your home directory.
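The command-line searches mentioned above can be done with find, which also covers size and modification date. A sketch with placeholder file names:

```shell
# Create a few sample files to search through (names are placeholders)
mkdir -p searchdemo/notes
echo "draft" > searchdemo/notes/report.txt
echo "old" > searchdemo/notes/report.bak

# Search by name, case-insensitively
find searchdemo -iname 'report*'

# Restrict by modification time: files changed within the last day
find searchdemo -type f -mtime -1

# Restrict by size: files smaller than 1 MB (use -size +100M for larger than 100 MB)
find searchdemo -type f -size -1M
```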
Copy and paste works the same as in other operating systems. First select the text, so that it appears highlighted, usually in blue. Then press Ctrl–C. Now move the keyboard focus to the right position. Finally, to insert the text, press Ctrl–V.
To copy or paste in the terminal, additionally press Shift together with the above key combinations.
An alternative way of using copy and paste is described in the following. First select the text. To paste the text, middle-click over the position where you want the text to be pasted. As soon as you make another selection, the text from the original selection will be replaced in the clipboard.
When copying information between programs, you must keep the source program open and paste the text before closing it. When a program closes, any content from that application that is on the clipboard is lost.
To surf the Web or send and receive e-mail messages, you must have configured an Internet connection. If you have installed openSUSE Leap on a laptop or a mobile device, NetworkManager is enabled by default. On the GNOME desktop, you can then establish Internet connections with NetworkManager as described in Section 28.3, “Configuring Network Connections”.
Depending on your environment, you can choose in YaST which basic service to use for setting up network connections (either NetworkManager or wicked). For details, see Section 13.4.1.1, “Configuring Global Networking Options”.
The GNOME desktop includes Firefox, a Mozilla*-based Web browser. You can start it by clicking › › .
You can type an address into the location bar at the top or click links in a page to move to different pages, like in any other Web browser.
For more information, see Chapter 14, Firefox: Browsing the Web.
For reading and managing your mail and events, use Evolution. Evolution is a groupware program that makes it easy to store, organize and retrieve your personal information.
Evolution seamlessly combines e-mail, a calendar, an address book, and a memo and task list in one easy-to-use application. With its extensive support for communications and data interchange standards, Evolution can work with existing corporate networks and applications, including Microsoft* Exchange.
To start Evolution, click › › .
The first time you start Evolution, it prompts you with a few questions to set up a mail account and import mail from an old mail client. Then it shows you how many new messages you have and lists upcoming appointments and tasks. The calendar, address book and mail tools are available in the shortcut bar on the left.
For more information, see Chapter 15, Evolution: E-Mailing and Calendaring.
For creating and editing documents, LibreOffice is installed with the GNOME desktop. LibreOffice is a complete set of office tools that can both read and save Microsoft Office file formats. LibreOffice has a word processor, a spreadsheet, a database, a drawing tool and a presentation program.
To start LibreOffice, click › › .
For more information, see Chapter 10, LibreOffice: The Office Suite.
To see the state of the computer battery on your laptop, check the battery icon in the right part of the panel. On certain events, such as a critically low battery state, GNOME will display notifications informing you about the event.
You can open the power settings via › › › .
For more information, see Section 3.3.2, “Configuring Power Settings”.
You can use the Archive Manager application (also known as File Roller) to
create, view, modify or unpack an archive. An archive is a file that acts as
a container for other files. An archive can contain many files, directories
and subdirectories, usually in compressed form. Archive Manager supports
common formats such as zip,
tar.gz, tar.bz2,
lzh, and rar. You can use Archive
Manager to create, open and extract a compressed non-archive file.
To start Archive Manager, click › › .
If you already have a compressed file, double-click the file name in GNOME Files to view the contents of the archive in Archive Manager.
In Archive Manager, click .
Select the archive you want to open.
Click .
Archive Manager displays the following:
The archive name in the titlebar.
The archive contents in the content area.
To open another archive, click again. Archive Manager opens each archive in a new window. To open another archive in the same window, you must first select from the menu in the right part of the window to close the current archive, then click .
If you try to open an archive that was created in a format that Archive Manager does not recognize, the application displays an error message.
To display the archive's properties, click the last icon in the titlebar and select . Details like name, location, type, last modification, number of files, size, and compression ratio are shown.
In Archive Manager, select the files that you want to extract.
Click .
Specify the directory where Archive Manager will extract the files.
Choose from the following extraction options:
| Option | Description |
|---|---|
| All files | Extracts all files from the archive. |
| Selected files | Extracts the selected files from the archive. |
| Files | Extracts from the archive all files that match the specified pattern. |
| Keep directory structure | Reconstructs the directory structure when extracting the specified files. For example, you specify . If you do not select the option, Archive Manager does not create any subdirectories. Instead, it extracts all files from the archive, including files from subdirectories, to . |
| Do not overwrite newer files | If not active, Archive Manager overwrites any files in the destination directory that have the same name as the specified files. If you select this option, Archive Manager does not extract a specified file if a file with the same name already exists in the destination directory. |
Click .
To extract an archived file in a file manager window without opening Archive Manager, right-click the file and select .
The Extract operation extracts a copy of the specified files from the archive. The extracted files have the same permissions and modification date as the original files that were added to the archive.
The Extract operation does not change the contents of the archive.
In Archive Manager, click the main menu icon in the top left part of the window and select .
Specify the name and location of the new archive.
Select an archive type from the drop-down box.
Click .
Archive Manager creates an empty archive, but does not yet write the archive to disk. Archive Manager writes a new archive to disk only when the archive contains at least one file. If you create a new archive and quit Archive Manager before you add any files to the archive, the archive will be deleted.
Add files and directories to the new archive:
Click and select the files or directories you want to add.
Click .
Archive Manager adds the files to the current directory in the archive.
You can also add files to an archive in a file manager window without opening Archive Manager. See Section 2.1.2, “Compressing Files or Directories” for more information.
You can take a snapshot of your screen or of an individual application window by using the Take Screenshots utility. Start it by pressing Print to take a screenshot of the entire desktop or by pressing Alt–Print to take a screenshot of the currently active window or dialog.
The screenshots are automatically saved to your
~/Pictures directory.
You can also use GIMP to take screenshots. (For more information on GIMP, see Chapter 18, GIMP: Manipulating Graphics). In GIMP, click › › , select an area, choose a delay and then click .
Documents that need to be shared or printed across platforms can be saved as PDF (Portable Document Format) files. Document Viewer (also known as Evince) can open PDF files and many similar file types, such as XPS, DjVu, or TIFF.
In rare cases, documents will not be displayed correctly in Document Viewer. This can happen, for example, with certain forms, animations or 3D images. In such cases, ask the authors of the file what viewer they recommend. However, in some cases the recommended viewer will not work on Linux.
To open Document Viewer, double-click a PDF file in a file manager window. Document Viewer will also open when you download a PDF file from a Web site. To open Document Viewer without a file, select › › .
To view a PDF file in Document Viewer, click the cog wheel icon to open the menu and select . Now locate the desired PDF file and click .
Use the navigation icons at the top of the window or the thumbnails in the left panel to navigate through the document. If your PDF document provides bookmarks, you can access them in the left panel of the viewer.
When you connect to the Internet, the updater applet automatically checks whether software updates for your system are available. When important updates are available, you will receive a notification on your desktop.
For detailed information on how to install software updates with the updater applet and how to configure it, refer to the chapter about installing and removing software in Section 11.4, “Keeping the System Up-to-date”.
Along with the applications described in this chapter for getting started, you can use many other applications on GNOME. Find detailed information about these applications in the other parts of this manual.
To learn more about GNOME and GNOME applications, see http://www.gnome.org.
To report bugs or add feature requests, go to http://bugzilla.gnome.org.
You can change the way the GNOME desktop looks and behaves to suit your own personal tastes and needs. Some possible changes of settings are:
Keyboard and mouse configuration, as described in Section 3.3.3, “Modifying Keyboard Shortcuts” and Section 3.3.4, “Configuring the Mouse and Touchpad”
Desktop background, as described in Section 3.2.1, “Changing the Desktop Background”
Sounds, as described in Section 3.3.7, “Configuring Sound Settings”
These settings and others can be changed in the dialog.
Whereas YaST is a desktop-independent system-wide tool to configure most aspects of your product installation, the settings dialog is a GNOME configuration tool. It focuses on look and feel, personal settings and preferences of your GNOME desktop.
To access the GNOME settings dialog, click › › . The dialog is divided into the following three categories:
From here, you can change the background of your desktop or of the lock screen, and configure language settings. For more information, see Section 3.2, “Personal”.
Allows you to configure hardware components such as monitors, printers, mice/touchpads, network adapters and sound devices. You can also change key combination settings and set up power-saving features. For more information, see Section 3.3, “Hardware”.
Lets you configure system settings such as date and time, whether to start software when inserting flash disks or whether you want to share your screen with others. You can also set up user accounts. If you want, you can also start YaST from this screen, though it is also available separately from within the menu. For more information, see Section 3.4, “System”.
To change some system-wide settings, the control center will prompt you for
the root password and start YaST. This is mostly
the case for administrator settings (including most of the hardware, the
graphical user interface, Internet access, security settings, user
administration, software installation and system updates and information).
Follow the instructions in YaST to configure these settings. For
information about using YaST, refer to the integrated YaST help texts or
to the
Start-Up.
This chapter focuses on individual settings you can change directly in the GNOME settings dialog, without having to use YaST.
The following sections introduce examples of how to configure some personal aspects of your GNOME desktop, like your languages used or desktop backgrounds.
The desktop background is the image or color that is applied to your desktop. You can also customize the image shown when the screen is locked.
To change the desktop background or the lock screen:
Click › › › .
Click or .
Click , , or .
Wallpapers are preconfigured images distributed with your system.
Pictures are your own images from your Pictures
directory (~/Pictures). Colors are predefined colors
chosen by GNOME developers.
Choose an option from the list.
When you are satisfied with your choice, click .
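GNOME stores this choice in dconf under the org.gnome.desktop.background schema. Assuming the default schema keys, the setting can be inspected or pre-seeded with a keyfile excerpt like the following (suitable for `dconf load /`; the image path is a placeholder):

```ini
# dconf keyfile excerpt; the picture path is a placeholder
[org/gnome/desktop/background]
picture-uri='file:///home/tux/Pictures/wallpaper.png'
picture-options='zoom'
```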
openSUSE Leap can be configured to use any of several languages. The language setting determines the language of dialogs and menus and can also determine the keyboard and clock layout.
To configure your language settings click › › › .
Here you can choose:
Interface language.
Date and number formats, currency and related options.
Input sources (keyboard layout). For non-alphabetic languages there can be additional settings.
Settings Made Using ibus-setup Do Not Take Effect
On GNOME, settings made using ibus-setup do not take effect. Instead, always use the application:
To change input methods, use the panel .
To change the key combination that switches between input methods, use the panel . In it, choose the category and the entry .
In the following sections you will find examples of how to configure some hardware aspects of your GNOME desktop, including keyboard or mouse preferences, handling of removable drives (and other media) or screen resolution.
The Bluetooth module lets you set the visibility of your machine over Bluetooth and connect to available Bluetooth devices. To configure Bluetooth connectivity, follow these steps:
Click › › › to open the Bluetooth settings module.
To use Bluetooth, turn the switch on.
To make your computer visible over Bluetooth, turn the switch on. The computer will start searching for other visible Bluetooth devices in the vicinity and display any found devices in the list. At first, the list may be empty.
The switch is meant to be used only temporarily. You only need to turn it on for the initial setup of a connection to a Bluetooth device. After the connection has been established, turn off the switch.
On the device you want to connect, turn on Bluetooth connectivity and visibility, too.
If the desired device has been found and is shown in the list, click it to establish a connection to it.
You will be asked whether the PINs of the two devices match.
If the PINs match, confirm this on both your computer and the device.
Both are now paired. On your computer, the device in the list is shown as .
Depending on the device type, you can now, for example, see it as a storage device in GNOME Files or set its volume in the Sound settings.
To connect to a paired Bluetooth device, select the device in the list. In the dialog that appears, turn the switch on. You can send files to the connected device by using the button. If you are connected to a device such as a mobile phone, you can use it as a network device by activating the appropriate option.
To remove a connected device from the list on your computer, click and confirm your choice. To completely remove the pairing, you also need to do so on your device.
Click › › › to open the Power settings module.
In the upper part of the dialog, you can see the current state of the battery.
In the section of the dialog, set the to conserve power. You can also set whether to dim the screen after a period of inactivity and set the time interval. You can also set whether to turn off wireless networking after the period of inactivity.
In the section of the dialog, set the . When you click it, a separate dialog opens.
In it, you can turn on automatic suspending and set the associated time intervals. If you are using a computer with a battery, you can set these separately for when the computer is running on battery power and for when it is plugged in.
You can also set the action performed when the power button is pressed. Choose to use a mode where the computer turns off completely but saves your running session to the hard disk. Alternatively, choose or .
To modify keyboard shortcuts click › › › .
The dialog shows the keyboard shortcuts that are configured for your system. Click the categories on the right to view the current shortcuts.
To edit a key combination, first click the row. To set a new key combination, press the keys. To disable a shortcut, press Backspace instead.
To configure keyboard accessibility options, refer to Section 4.4, “Mobility Impairments”. To configure your keyboard layout, refer to Section 3.2.2, “Configuring Language Settings”.
To modify mouse and touchpad options, click › › › .
In the section of the dialog, you can set the orientation (left or right).
In the section of the dialog, use to adjust the sensitivity of the mouse pointer.
In the section of the dialog, you can turn the touchpad on and off. Use to adjust the sensitivity of the touchpad pointer. You can also disable the touchpad while typing and enable clicks by tapping the touchpad.
To test your settings, click and try the pointing device.
For configuration of mouse accessibility options, refer to the Section 4.4, “Mobility Impairments”.
The module lets you connect to any available local or remote CUPS server and configure printers.
To start the Printers module, click › › › . For detailed information, refer to Chapter 6, Managing Printers.
To specify resolution and orientation for your screen or to configure multiple screens, click › › › .
To find the right monitor, look for the numbers displayed in the upper left corner of all monitors after you have opened the dialog. To set options for a monitor, click the list item of the monitor. A new dialog appears.
If multiple monitors are attached to the computer, the left part of the dialog will allow you to choose how to use the monitor. You can choose between:
The screen that shows the panel and important messages.
A monitor that expands the desktop of the primary monitor.
A monitor that mirrors the image on the primary monitor. In terms of resolution, the lowest common denominator will be used.
A screen that is not used.
To rotate the displayed image, use the buttons with the arrows pointing left and right. To mirror the displayed image, use the button with the double arrow icon.
You can set a different resolution by changing the value next to . Not all resolutions provide a sharp and unstretched image. To find the best resolution for your monitor, refer to its manual.
When you are done, click .
The monitors will now readjust. This can take multiple seconds during which the screen can be black or distorted.
Afterward, a confirmation dialog will appear.
If the configuration looks correct, click .
If the configuration is not what you hoped for, click or wait for 20 seconds. The changes will then be reverted.
If you are using multiple screens, set up how they are arranged, so you can use the mouse pointer properly across monitors.
Click .
To find the right monitor, look for the numbers displayed in the upper left corner of all monitors. Click and drag the monitor image around to move it.
When you are done, click .
If the configuration looks correct, click .
If the configuration is not what you hoped for, click or wait for 20 seconds. The changes will then be reverted.
The tool lets you manage sound devices and set the sound effects. In the top part of the dialog, you can select the general output volume or turn the sound off completely.
To open the sound settings, click › › › .
Use the tab to select the device for sound output. Below the list, choose the sound device setting you prefer, for example balance.
Use the tab to set the input device volume or to mute the input temporarily. If you have more than one sound device, you can also select a default device for audio input in the list.
Use the tab to configure whether and how you want sound to be played when message boxes appear.
Specify the volume at which the sound effects will be played under . You can also turn the effects on and off.
Select the to use.
To set up networking options, click › › › .
In the dialog that appears, you can configure wired or wireless connections, proxies, and VPNs.
To learn more about setting up network connections, see Chapter 28, Using NetworkManager.
In the following sections, you will find examples of how to configure some system aspects of your GNOME desktop. These include preferred applications, changing your user password, and session sharing preferences.
To learn more about configuring assistive technologies, see Chapter 4, Assistive Technologies.
For security reasons, it is a good idea to change your login password from time to time. To change your password:
Click › › › .
Click the button labeled with dots next to .
In the first text box, type your current password.
In the next text box, type a new password.
You can also click the cog wheel icon at the end of the text box to generate a random password.
Confirm your new password by typing it again in the last text box.
Click .
To change the default application for various common tasks such as browsing the Internet, sending mails or playing multimedia files, click › › › .
Click .
Select one of the available applications from the drop-down box. You can choose an application to handle Web, mail, calendar, music, videos or photographs.
The GNOME desktop includes assistive technologies to support users with various impairments and special needs, and to interact with common assistive devices. This chapter describes several assistive technology applications designed to meet the needs of users with physical disabilities like low vision or impaired motor skills.
To configure accessibility features, open the GNOME Settings dialog (for example using › › ) and click . Each assistive feature can be enabled separately using this dialog.
If you need more direct access to individual assistive features, turn on in the dialog. A new menu will appear on the bottom panel.
In the section of the dialog, you can enable features that help people with impaired vision.
Turning on enables high contrast black and white icons in the GNOME desktop.
Turning on enlarges the font used in the user interface.
Turning on enables a screen magnifier. You can set the desired magnification and magnifier behavior, including color effects.
If the is turned on, any UI element or text that receives keyboard focus is read aloud.
If the are turned on, a sound is played whenever Num Lock or Caps Lock are turned on.
In the section of the dialog, you can enable features helping people with impaired hearing.
If the are turned on, a window title or the entire screen is flashed when an alert sound occurs.
In the and sections of the dialog, you can enable features that help people with mobility impairments.
If the is turned on, a virtual keyboard appears whenever you need to enter text. You can use the screen keyboard by clicking the virtual keys.
Click to open a dialog where you can enable various features that make typing easier.
With , you can turn accessibility features on or off by using the keyboard.
allows you to type key combinations one key at a time rather than having to hold down all of the keys at once. For example, the Alt–Tab shortcut switches between windows.
With sticky keys turned off, you need to hold down both keys at the same time. With sticky keys turned on, press Alt and then Tab to do the same.
Turn on if you want a delay between pressing a key and the letter being displayed on the screen. This means that you need to hold down each key you want to type for a little while before it appears. Use slow keys if you accidentally press several keys at a time when you type, or if you find it difficult to press the right key on the keyboard first time.
Turn on to ignore key presses that are rapidly repeated. This can help, for example, if you have hand tremors which cause you to press a key multiple times when you only want to press it once.
Turn on to control the mouse pointer using the numeric keypad on your keyboard.
Click to open a dialog where you can enable various features that make clicking easier: simulated secondary click and hover click.
Turn on to activate the secondary click (usually the right mouse button) by holding down the primary button for a predefined . This is useful if you find it difficult to move your fingers individually on one hand, or if your pointing device only has a single button.
Turn on to trigger a click by hovering your mouse pointer over an object on the screen. This is useful if you find it difficult to move the mouse and click at the same time. If this feature is turned on, a small Hover Click window opens and stays above all of your other windows. You can use this to choose what sort of click should happen when you hover. When you hover your mouse pointer over a button and do not move it, the pointer gradually changes color. When it has fully changed color, the button will be clicked.
Use the slider to adjust the according to your needs.
You can find further information in the GNOME help, which is also available online at https://help.gnome.org/users/gnome-help/3.20/a11y.html.en.
From your desktop, you can access files and directories or certain services on remote hosts or make your own files and directories available to other users in your network. openSUSE® Leap offers the following ways of accessing and creating network shared resources.
openSUSE® Leap makes it easy to print your documents, whether your computer is connected directly to a printer or linked remotely on a network. This chapter describes how to set up printers in openSUSE Leap and manage print jobs.
The Backup tool is a simple framework to let users back up and restore their own data such as home directories or selected files. It is possible to create scheduled backups or backups on request, and to play back a previous state of this data.
The GNOME Passwords and Keys program is an important component of the encryption infrastructure on your system. With this program, you can create and manage PGP and SSH keys, import, export and share keys, back up your keys and keyring, and cache your passphrase.
gFTP is a multithreaded file transfer client. It supports the FTP, FTPS (control connection only), HTTP, HTTPS, SSH, and FSP protocols. Furthermore, it allows the transfer of files between two remote FTP servers via FXP. To start gFTP, click › › .
Your file manager, GNOME Files, lets you browse your network for shared resources and services. Learn more about this in Section 5.3, “Accessing Network Shares”.
Using GNOME Files, configure your files and directories to share with other members of your network. Make your data readable or writable for users from any Windows or Linux workstation. Learn more about this in Section 5.4, “Sharing Directories”.
openSUSE Leap can be configured to integrate into an existing Windows network. Your Linux machine then behaves like a Windows client. It takes all account information from the Active Directory domain controller, just as the Windows clients do. Learn more about this in Section 5.5, “Managing Windows Files”.
You can configure a Windows network printer through the GNOME control center. Learn how to do this in Section 5.6, “Configuring and Accessing a Windows Network Printer”.
You can connect to a network with wired and wireless connections. To view your network connection, check the network icon in the right part of the main panel. If you click the icon, you can see more details in the menu. Click the connection name to see more details and access the settings.
To learn more about connecting to a network, see Chapter 28, Using NetworkManager.
Network browsing, be it SMB browsing for Windows shares or SLP browsing for remote services, relies heavily on the machine's ability to send broadcast messages to all clients in the network. These messages and the clients' replies to them enable your machine to detect any available shares or services.
For broadcasts to work effectively, your machine must be part of the same subnet as all other machines it is querying. If network browsing does not work on your machine or the detected shares and services do not meet your expectations, ensure that you are connected to the appropriate subnet.
To allow network browsing, your machine needs to keep several network ports open to send and receive network messages that provide details on the network and the availability of shares and services.
If you try to browse a network while a restrictive firewall is running on your machine, GNOME Files warns you that your security restrictions are not allowing it to query the network.
With your openSUSE Leap machine being an Active Directory client, you can browse, view and manipulate data located on Windows servers. The following examples are the most prominent ones:
Use GNOME Files's network browsing features to browse your Windows data.
Use GNOME Files to display the contents of your Windows user directory as you would for displaying a Linux directory. Create new files and directories on the Windows server.
Many GNOME applications allow you to open files on the Windows server, manipulate them and save them back to the Windows server.
GNOME applications, including GNOME Files, support Single Sign-On. This means that you do not need to re-authenticate when you access other Windows resources. These can be Web servers, proxy servers or groupware servers like Microsoft Exchange*. Authentication against all these is handled silently in the background using the user name and password you provided when you logged in.
To access your Windows data using GNOME Files, proceed as follows:
Open GNOME Files and click in the Places pane.
Double-click .
Double-click the icon of the workgroup containing the computer you want to access.
Click the computer’s icon (and authenticate if prompted to do so) and navigate to the shared directory on that computer.
To create directories in your Windows user directory using GNOME Files, proceed as you would when creating a Linux directory.
If you are part of a corporate network and authenticate against a Windows Active Directory server, you can access corporate resources such as printers. GNOME allows you to configure printing from your Linux client to a Windows network printer.
To configure a Windows network printer for use through your Linux workstation, proceed as follows:
Start the GNOME control center from the main menu by clicking › › › .
The CUPS service is not started by default after installation of openSUSE Leap. If the dialog shows a message that the printing service is currently not available, you need to start the CUPS service manually.
Start it by opening a shell and entering the following command:

tux > sudo systemctl start cups
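Starting the service this way lasts only until the next reboot. If you want CUPS to start automatically at boot (an optional system-configuration step, also requiring root privileges), you can additionally enable the service:

```shell
tux > sudo systemctl enable cups
```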
Click and enter the
root password.
Click the plus icon.
Select a Windows printer connected via Samba.
To print to the Windows network printer configured above, select it from the list of available printers.
Before you can install a printer, you need to know the root password and have your printer information ready. Depending on how you connect the printer, you might also need the printer URI, TCP/IP address or host, and the driver for the printer. A number of common printer drivers ship with openSUSE Leap. If you cannot find a driver for the printer, check the printer manufacturer's Web site.
Click › › › .
Click and enter the root password.
Click the plus icon.
If there are too many printers in the list, filter them by entering an IP address or a keyword into the search field in the lower part of the dialog.
Select a printer from the list of available printers and click .
The installed printer appears in the Printers panel. You can now print to the printer from any application.
First schedule which data you want to back up and when to do it.
Click › › .
If you are opening the application for the first time, you will see a screen welcoming you. Click .
On the tab you can turn the on and off. You can also see the overview of the current settings.
On the tab, select a and a to which the backup should be written.
On the tab, select the directories to back up and the directories to ignore. For example, if you want to back up your home directory except for the Downloads directory, add your home directory to the category and your Downloads directory to the category.
On the tab select how often to perform the automatic backups (daily or weekly) and how long to keep the backups.
(Optional) If you want to perform a backup immediately, too, switch back to the tab and click .
Choose whether you want the backup to be password-protected.
If so, type a password in the two text boxes next to and .
If not, click .
Click to start the backup process. When the backup is finished, the window will close.
To restore a previous state of your data, proceed as follows:
Select › › .
On the tab, click .
Choose the location from which to restore. Click . The tool searches for backups stored in that location.
Choose a date. Click .
Choose whether to restore the files to the original location or to another directory. Click to see a summary of your choices.
Click to start the restoration process.
Start the program by choosing › › .
Signing. Attaching electronic signatures to pieces of information, such as e-mail messages or software, proves their origin. To keep someone else from writing messages in your name, and to protect both you and your recipients, you should sign your mails. Signatures help you verify the sender of the messages you receive and distinguish authentic messages from malicious ones.
Software developers sign their software so that you can check the integrity. Even if you get the software from an unofficial server, you can verify the package with the signature.
Encryption. You might also have sensitive information you want to protect from other parties. Encryption helps you transform data and make it unreadable for others. This is important for companies so they can protect internal information and their employees' privacy.
To exchange encrypted messages with other users, you must first generate your own pair of keys. It consists of two parts:
Public Key. This key is used for encryption. Distribute it to your communication partners, so they can use it to encrypt files or messages for you.
Private Key. This key is used for decryption. Use it to make encrypted files or messages from others (or yourself) legible again.
If others gain access to your private key, they can decrypt files and messages intended only for you. Never grant others access to your private key.
OpenPGP is a non-proprietary protocol for encrypting e-mail with the use of public-key cryptography based on PGP. It defines standard formats for encrypted messages, signatures, private keys, and certificates for exchanging public keys.
Click › › .
Click › .
Select and click .
Specify your full name and e-mail address.
Click to specify the following advanced options for the key.
An optional comment.
Specifies the encryption algorithms used to generate your keys. is the recommended choice because it lets you encrypt, decrypt, sign, and verify as needed. Both and allow only signing.
Specifies the length of the key in bits. The longer the key, the more secure it is (provided a strong passphrase is used). Keep in mind that performing any operation with a longer key requires more time than it does with a shorter key. Acceptable values are between 1024 and 4096 bits. At least 2048 bits are recommended.
Specifies the date at which the key will cease to be usable for performing encryption or signing operations. You will need to either change the expiration date or generate a new key or subkey after this amount of time passes. Sign your new key with your old one before it expires to preserve your trust status.
Click to create the new key pair.
The dialog opens.
Specify the passphrase twice for your new key, then click .
When you specify a passphrase, use the same practices you use when you create a strong password.
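If you prefer the command line, the same kind of key pair can be created with GnuPG, the tool that Passwords and Keys builds on. The following is a minimal sketch, assuming GnuPG 2.1 or later is installed; the user ID is a placeholder, and a throwaway GNUPGHOME is used so your real keyring stays untouched:

```shell
# Use a scratch keyring so the example does not touch ~/.gnupg
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"

# Generate an RSA 2048-bit key pair that never expires.
# An empty passphrase keeps the sketch non-interactive; for a real key,
# choose a strong passphrase as recommended above.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Tux Example <tux@example.org>" rsa2048 default never

# Show the newly created key
gpg --list-keys tux@example.org
```

Here, rsa2048 selects the algorithm and key length, default requests the default key capabilities, and never disables expiration.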
Secure Shell (SSH) is a method of logging in to a remote computer to execute commands on that machine. SSH keys are used in a key-based authentication system as an alternative to the default password authentication system. With key-based authentication, there is no need to type a password manually to authenticate.
Click › › .
Click › .
Select , then click .
Specify a description of what the key is to be used for.
You can use your e-mail address or any other reminder.
Optionally, click to specify the following advanced options for the key.
Encryption Type. Specifies the encryption algorithms used to generate your keys. Select to use the Rivest-Shamir-Adleman (RSA) algorithm to create the SSH key. This is the preferred and more secure choice. Select to use the Digital Signature Algorithm (DSA) to create the SSH key.
Key Strength. Specifies the length of the key in bits. The longer the key, the more secure it is (provided a strong passphrase is used). Keep in mind that performing any operation with a longer key requires more time than it does with a shorter key. Acceptable values are between 1024 and 4096 bits. At least 2048 bits is recommended.
Click to create the new key, or click to create the key and set up another computer to use for authentication.
Specify the passphrase for your new key, click , then repeat.
When you specify a passphrase, use the same practices you use when you create a strong password.
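The equivalent command-line tool is ssh-keygen from OpenSSH. A sketch with placeholder names; the key is written to a scratch directory, and the empty passphrase (-N '') is only there to keep the example non-interactive:

```shell
# Generate a 2048-bit RSA SSH key pair in a scratch directory
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -C "description of what the key is used for" \
    -N '' -f "$tmpdir/id_rsa"

# The result is a private key (id_rsa) and a public key (id_rsa.pub).
# The public half is what you would copy to another computer to set up
# key-based authentication, for example with:
#   ssh-copy-id -i "$tmpdir/id_rsa.pub" user@host
ls "$tmpdir"
```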
You can modify properties of existing OpenPGP or SSH keys.
The descriptions in this section apply to all OpenPGP keys.
Click › › .
Double-click the PGP key you want to view or edit.
Use the options on the tab to add a photo to the key or to change the passphrase associated with the key.
Photo IDs allow a key owner to embed one or more pictures of themselves in a key. These identities can be signed like normal user IDs. A photo ID must be in JPEG format. The recommended size is 120×150 pixels.
If the chosen image does not meet the required file type or size, can resize and convert it on the fly from any image format supported by the GDK library.
Click the tab to add a user ID to a key.
See Section 8.3.1.1, “Adding a User ID” for more information.
Click the tab, which contains the following properties:
Key ID: The Key ID is similar to the Fingerprint, but the Key ID contains only the last eight characters of the fingerprint. It is generally possible to identify a key with only the Key ID, but sometimes two keys might have the same Key ID.
Type: Specifies the encryption algorithm used to generate a key. DSA keys can only sign. ElGamal keys are used to encrypt.
Strength: Specifies the length, in bits, of the key. The longer the key, the more security it provides. However, a long key will not compensate for the use of a weak passphrase.
Fingerprint: A unique string of characters that exactly identifies a key.
Created: The date the key was created.
Expires: The date the key can no longer be used (a key can no longer be used to perform key operations after it has expired). Changing a key's expiration date to a point in the future re-enables it. A good general practice is to have a master key that never expires and multiple subkeys that do expire and are signed by the master key.
Override Owner Trust: Here you can set the level of trust in the owner of the key. Trust is an indication of how sure you are of a person's ability to correctly extend the Web of trust. When there is a key that you have not signed, the validity of the key is determined from its signatures and how much you trust the people who made those signatures.
Export Secret Key: Exports the key to a file.
Subkeys: See Section 8.3.1.2, “Editing OpenPGP Subkey Properties” for more information.
Click .
User IDs allow multiple identities and e-mail addresses to be used with the same key. Adding a user ID is useful, for example, when you want to have an identity for your job and one for your friends. They take the following form:
Name (COMMENT) <E-MAIL>
Click › › .
Double-click the PGP key you want to view or edit.
Click the tab, then click .
Specify a name in the field.
You must enter at least five characters in this field.
Specify an e-mail address in the field.
Your e-mail address is how most people will locate your key on a key server or other key provider. Make sure it is correct before continuing.
In the field, specify additional information that will display in the name of your new ID.
This information can be searched for on key servers.
Confirm your changes and enter the passphrase when prompted for it.
Each OpenPGP key has a single master key, which is used only for signing. Subkeys are used for encrypting and for signing as well. In this way, if your subkey is compromised, you do not need to revoke your master key.
Click › › .
Double-click the PGP key you want to edit.
Click the tab, then click to show the category.
Use the buttons on the left of the dialog to add, delete, expire, or revoke subkeys.
Each subkey has the following information:
ID: The identifier of the subkey.
Type: Specifies the encryption algorithm used to generate a subkey. DSA keys can only sign, ElGamal keys are used to encrypt, and RSA keys are used to sign or to encrypt.
Usage: Shows if the key can be used to sign, to certify, or also to encrypt.
Created: Specifies the date the key was created.
Expires: Specifies the date the key can no longer be used.
Status: Specifies the status of the key.
Strength: Specifies the length, in bits, of the key. The longer the key, the more security it provides. However, a long key will not compensate for the use of a weak passphrase.
Click .
The descriptions in this section apply to all SSH keys.
Click › › .
Double-click the Secure Shell key you want to view or edit.
Use the options on the tab to change the name of the key or the passphrase associated with the key.
Click the tab, which contains the following properties:
Algorithm: Specifies the encryption algorithm used to generate a key.
Strength: Indicates the length in bits of a key. The longer the key, the more security it provides. However, a long key does not make up for the use of a weak passphrase.
Location: The location where the private key has been stored.
Fingerprint: A unique string of characters that exactly identifies a key.
Export Complete Key: Exports the key to a file.
Click .
Keys can be exported to text files. These files contain human-readable text at the beginning and at the end of a key. This format is called an ASCII-armored key.
To import keys:
Click › › .
Click › .
Select a file containing at least one ASCII-armored public key.
Click to import the key.
You can also paste keys inside :
Select an ASCII-armored public block of text, then copy it to the clipboard.
Click › › .
Click › .
To export keys:
Click › › .
Select the keys you want to export.
Click › .
Specify a file name and location for the exported key.
Click to export the key.
You can also export keys to the clipboard in an ASCII-armored block of text:
Click › › .
Select the keys you want to export.
Click › .
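For reference, the same ASCII-armored round trip looks like this with GnuPG on the command line (GnuPG 2.1 or later assumed; the user ID and file name are placeholders, and two scratch keyrings stand in for two machines):

```shell
# Two scratch keyrings stand in for the exporting and the importing machine
src=$(mktemp -d); dst=$(mktemp -d)
chmod 700 "$src" "$dst"

# Create a key to export (placeholder identity, empty passphrase for the sketch)
GNUPGHOME="$src" gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Tux Example <tux@example.org>" rsa2048 default never

# Export the public key as ASCII-armored text ...
GNUPGHOME="$src" gpg --armor --export tux@example.org > "$src/tux.asc"
head -n 1 "$src/tux.asc"    # -----BEGIN PGP PUBLIC KEY BLOCK-----

# ... and import it into the other keyring
GNUPGHOME="$dst" gpg --import "$src/tux.asc"
```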
Signing another person's key means that you are giving trust to that person. Before signing a key, carefully check the key's fingerprint to ensure that the key really belongs to that person.
Trust is an indication of how sure you are of a person's ability to correctly extend the Web of trust. When there is a key that you have not signed, the validity of the key is determined from its signatures and how much you trust the people who made those signatures.
Passwords and Keys integrates with GNOME Files. You can encrypt, decrypt, sign, verify files, and import public keys from the file manager window without launching .
The package nautilus-extension-seahorse has to be installed to enable file manager integration.
In GNOME Files, right-click the files you want to encrypt.
Select .
Select the people (recipients) you want to encrypt the file to, then click .
If prompted, specify the passphrase of your private key, then click .
In GNOME Files, right-click the files you want to sign.
Select .
Select a signer, then click .
If prompted, specify the passphrase of your private key, then click .
To decrypt an encrypted file in GNOME Files, simply double-click the file you want to decrypt.
If prompted, specify the passphrase of your private key.
To verify files, simply double-click the detached signature file. Detached
signature file names often have a .sig extension.
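On the command line, GnuPG checks a detached signature with `gpg --verify document.sig document`. As a sandbox-safe analog of the same idea (verification data kept in a separate file next to the document), a detached checksum file behaves similarly; file names here are hypothetical:

```shell
# A detached signature is verification data stored in a separate file
# next to the document. GnuPG would check it with:
#   gpg --verify document.sig document
# Here, a detached checksum file models the same workflow:
echo "important contents" > document
sha256sum document > document.sum

# Verification succeeds while the document is unchanged:
sha256sum -c document.sum

# ...and fails as soon as the document is modified:
echo "tampered" >> document
if ! sha256sum -c document.sum >/dev/null 2>&1; then
  echo "verification failed: document was changed"
fi
```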
You can use password keyring preferences to create or remove keyrings, to set the default keyring for application passwords or to change the unlock password of a keyring. To create a new keyring, follow these steps:
Click › › .
Click › › , then click .
Enter a name for the keyring and click .
Set and confirm a new for the keyring and click .
To change the unlock password of an existing keyring, right-click the keyring in the tab and click . You need to provide the old password to be able to change it.
To change the default keyring for application passwords, right-click the keyring in the tab and click .
You can keep your keys up-to-date by synchronizing keys periodically with remote keyservers. Synchronizing will ensure that you have the latest signatures made on all of your keys, so that the Web of trust will be effective.
Click › › .
Click › , then click the tab.
provides support for HKP and LDAP keyservers.
HKP Key Servers:
HKP key servers are ordinary Web-based key servers, such as the popular
hkp://pgp.mit.edu:11371, also accessible at
http://pgp.mit.edu.
LDAP Key Servers:
LDAP key servers are less common, but use the standard LDAP protocol to
serve keys. ldap://keyserver.pgp.com is a good LDAP
server.
You can add or remove key servers using the buttons on the left. To add a new key server, set its type, host and port, if necessary.
Set whether you want to automatically publish your public keys and which keyserver to use. Set whether you want to automatically retrieve keys from key servers and whether to synchronize modified keys with keyservers.
Click .
Key Sharing is provided by DNS-SD, also known as Bonjour or Rendezvous. Enabling key sharing adds the local users' public key rings to the remote search dialog. Using these local key servers is generally faster than accessing remote servers.
Click › › .
Click › , then click the tab.
Select .
Click .
gFTP is a multithreaded file transfer client. It supports the FTP, FTPS (control connection only), HTTP, HTTPS, SSH, and FSP protocols. Furthermore, it allows the transfer of files between two remote FTP servers via FXP. To start gFTP, click › › .
There are two common ways of transferring files via FTP: ASCII and binary.
ASCII mode transfers files as text. ASCII files are
.txt, .asp,
.html, and .php files, for
example. Binary mode transfers files as raw data.
Binary files are .wav, .jpg,
.gif, and .mp3 files, for example.
To change the transfer mode, click the menu and select or .
When transferring ASCII files from Linux/Unix to Windows or vice versa, open the dialog by clicking › . Switch to the tab and select to ensure that newline characters are correctly converted. This option will automatically be disabled in Binary mode.
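The newline conversion that ASCII mode performs can be reproduced in a shell: Windows text files end lines with a carriage return plus line feed (CR+LF), Unix files with a line feed (LF) only. A sketch:

```shell
# A "Windows" text file: lines end in carriage return + line feed.
printf 'line one\r\nline two\r\n' > dos.txt

# What ASCII mode does on a Windows-to-Unix transfer: strip the CRs.
tr -d '\r' < dos.txt > unix.txt

# The Unix copy contains no carriage returns anymore.
cr=$(printf '\r')
if ! grep -q "$cr" unix.txt; then
  echo "converted to Unix line endings"
fi
```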
To connect to a remote server, do the following:
Click › .
Specify a URL to connect to and click .
Specify your user name and click . Then specify your password and click . To connect anonymously, leave the user name blank.
If the connection is successful, the right part of the gFTP window lists files from the remote computer. The file listing on the left side continues to show files from your local computer. You can now upload and download files via drag and drop or by using the arrow buttons.
To bookmark a site you access frequently, click › . Specify a name for the bookmark, then click . The new bookmark is added to your list of bookmarks.
In the following figure, the file list on the right contains the remote server's directory of files. The file list on the left side contains your local computer's directory of files (on your hard disk or network).
To download files, select the files you want to download in the remote list of files, then click the arrow button pointing to the left. The progress of each download is listed in the field in the middle of the window. If the transfer is successful, the files appear in the directory listing on the left.
To upload a file, select the files you want to upload in your local directory listing on the left, then click the arrow button pointing to the right. The progress of each upload is listed in the field in the middle of the window. If the transfer is successful, the files appear in the remote directory listing on the right.
To modify preferences for your downloads, select › from the menu.
To set up an HTTP proxy server, do the following:
From the menu, select › , then select the tab.
Enter the and . If applicable, also provide your login credentials for the proxy server. Choose a proxy type from the drop-down box.
Click the tab, and enter the same proxy server information in the dialog as described above. Port numbers for FTP and HTTP proxy may differ.
Click .
You can find more information about gFTP at http://www.gftp.org.
Calc is the LibreOffice spreadsheet module. Spreadsheets consist of several sheets, containing cells which can be filled with elements like text, numbers, or formulas. A formula can manipulate data from other cells to generate a value for the cell in which it is inserted.
Besides LibreOffice Writer and LibreOffice Calc, LibreOffice also includes the modules Impress, Base, Draw, and Math. With these you can create presentations, design databases, draw up graphics and diagrams, and create mathematical formulas.
LibreOffice is an open source office suite that provides tools for all types of office tasks such as writing texts, working with spreadsheets, or creating graphics and presentations. With LibreOffice, you can use the same data across different computing platforms. You can also open and edit files in other formats, including Microsoft* Office* formats, then save them back to this format, if needed. This chapter contains information that applies to all LibreOffice modules.
LibreOffice consists of several application modules (subprograms) which are designed to integrate with each other. While this chapter contains information that applies to all LibreOffice modules, the following chapters and sections contain information on individual modules. Find a short description and where each module is described in Table 10.1, “The LibreOffice Application Modules”.
A full description of each module is available in the application help, described in Section 10.11, “For More Information”.
| Module | Purpose | Described in |
|---|---|---|
| Writer | Word processor module | |
| Calc | Spreadsheet module | |
| Impress | Presentation module | |
| Base | Database module | |
| Draw | Module for drawing vector graphics | |
| Math | Module for generating mathematical formulas | |
To start LibreOffice, click › › . In the LibreOffice start center, choose the type of document you want to create.
There are multiple methods to directly start one of the LibreOffice modules:
If any LibreOffice module is open, you can start any of the other modules by clicking › and then selecting the type of document you want to create.
You can also start individual LibreOffice modules from the menu .
As an alternative, use the command libreoffice and one
of the options
--writer, --calc,
--impress, --draw, or
--base to start the respective module.
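The mapping between document types and module options can be sketched as a small shell helper. The extension-to-module mapping below follows the standard OpenDocument file extensions:

```shell
# Pick the libreoffice module option for a given OpenDocument file.
# Extensions: .odt (text), .ods (spreadsheet), .odp (presentation),
# .odg (drawing), .odb (database).
module_flag() {
  case "$1" in
    *.odt) echo "--writer"  ;;
    *.ods) echo "--calc"    ;;
    *.odp) echo "--impress" ;;
    *.odg) echo "--draw"    ;;
    *.odb) echo "--base"    ;;
    *)     echo ""          ;;
  esac
}

# For example, a spreadsheet would be opened with:
echo "libreoffice $(module_flag budget.ods) budget.ods"
```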
LibreOffice has many command line options, especially for allowing document
conversions. To learn more about the command line options of LibreOffice, see
libreoffice --help or the man page of LibreOffice
(man libreoffice(1)).
Before you start working with LibreOffice, you may be interested in changing some options from the preferences dialog. Click › to open it. The most important ones are:
Specify your user data such as company, first and last name, street, city, and other useful information. This data has many uses: It is used in the comment functions of Writer and Calc, for authorship information in PDF documents, and for serial letters in Writer.
Map font names to installed fonts. This can be useful if you exchange documents with others and the document you received contains fonts that are not available on your system.
Contains loading and saving specific options. For example, you can choose whether to always create a backup copy and which file format LibreOffice should use by default.
To learn more about configuring LibreOffice, see Section 10.8, “Changing the Global Settings”.
The user interface of most of LibreOffice is very similar across its modules:
At the top of the application, there is the menu bar which gives access to almost all functionality of LibreOffice. The menu bar can be customized to include more or fewer functions. You can also add and remove menus.
By default, the toolbars are positioned directly below the menu bar. The toolbars comprise the most used and most important items of the module.
To dock a toolbar to any other side of the window, drag it to the right position. To make a toolbar float, drag it into the middle of the window. They can be customized to include more or fewer functions. You can also add and remove toolbars.
By default, the side bar is positioned at the right side of the LibreOffice window. On the first start of LibreOffice, it is only visible as several icons stacked vertically. Clicking one of the icons opens a panel with more elements. Click the icon again to close the panel. Similarly to the toolbars, the side bar comprises the most important functions.
To dock the side bar to the left or right side of the window, drag it to the right position. To make the side bar float, drag it into the middle of the window. To hide the side bar, click the vertical arrowhead button on the document-facing side of the side bar.
You can hide or show side bar panels but cannot customize their functionality.
The statusbar is displayed at the bottom of the window. It mainly shows information about the document, such as the number of words (in Writer) or the sum of values of selected cells (in Calc). However, it can also be used to change the zoom or language settings. Many elements open additional menus or dialogs on left click, right click, or double click.
For more information on customizing LibreOffice, see Section 10.7, “Customizing LibreOffice”.
The native file format of LibreOffice is the OpenDocument format. OpenDocument is an ISO-standardized format for office documents that is based on XML. However, LibreOffice can also work with documents, spreadsheets, presentations, and databases in many other formats, including Microsoft Office formats. Files in Microsoft Office formats can be opened and saved back normally.
If you use LibreOffice in an environment where you need to share documents with Microsoft Word users, you should have little or no trouble exchanging document files. However, very complex documents can require editing after opening. Complex documents are documents containing, for example, complicated tables, Microsoft Office macros, or unusual fonts, formatting, or graphical objects.
If you ever encounter issues opening documents, try the following strategies:
Text Documents. Consider opening text documents in the original application and saving them as RTF or plain text (TXT). However, saving as plain text means that all formatting will be lost.
Spreadsheets. Consider opening spreadsheets in the original application and saving them as Excel files. If this does not work, try the CSV format. However, saving as CSV means that all formatting, cell type definitions, formulas, and macros will be lost.
LibreOffice can read, edit, and save documents in several formats. It is not necessary to convert files from those formats to the OpenDocument format used by LibreOffice to use those files. However, if you want to convert the files, you can do so. To convert several documents, such as when first switching to LibreOffice, do the following:
Select › › .
Choose the file format from which to convert.
Click .
Specify where LibreOffice should look for templates and documents to convert and in which directory the converted files should be placed.
Documents retrieved from a Windows partition are usually in a
subdirectory of /windows.
Make sure that all other settings are correct, then click .
Review the summary of the actions to perform, then start the conversion by clicking .
The amount of time needed for the conversion depends on the number of files and their complexity. For most documents, conversion does not take long.
When everything is done, close the Wizard.
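The same batch conversion can be scripted with LibreOffice's headless mode. The sketch below only prints the commands it would run; the file names are hypothetical, and `libreoffice` must be installed for the real invocation:

```shell
# Create a few placeholder input files for the demonstration.
touch letter.doc report.doc

# Print the headless conversion command for each Word document.
# Dropping the "echo" would perform the actual conversions.
for f in *.doc; do
  echo "libreoffice --headless --convert-to odt $f"
done
```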
You can protect files in any LibreOffice format with a password. Unlike older versions of LibreOffice, recent versions apply very strong encryption to the document. However, this encryption does not protect the file names and file sizes of encrypted files. If that is important to you, see the alternate encryption methods described in Chapter 11, Encrypting Partitions and Files.
To save a file with a password, select › or › .
In the dialog that opens, activate the check box at the bottom and click .
Type and confirm your password, then click .
The next time you open the file, you will be prompted for the password.
To change the password, do either of the following:
Overwrite the same file by selecting › . Make sure is deactivated.
Select › and click to access the password dialog.
You can digitally sign documents to protect them. For this, you need a personal certificate, similar to an HTTPS certificate. You can either create a self-signed certificate or choose to obtain one from a Certificate Authority.
When applying a digital signature to a document, a kind of checksum is created from the document's content and your personal key. The checksum is stored together with the document.
When another person opens the document, the checksum will be generated again. The new checksum is then compared to the original checksum. If both are equal, the application will signal that the document has not been changed in the meantime.
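The change-detection part of this process can be modeled in a shell: a checksum is stored at "signing" time and compared against a freshly computed one when the document is opened. A real digital signature also involves your private key; this sketch, with hypothetical file names, models only the comparison:

```shell
# Compute a checksum when the document is "signed"...
printf 'contract text\n' > contract.odt
stored=$(sha256sum contract.odt | cut -d' ' -f1)

# ...and recompute and compare it when the document is opened:
current=$(sha256sum contract.odt | cut -d' ' -f1)
[ "$stored" = "$current" ] && echo "document unchanged"

# Any later modification makes the comparison fail:
printf 'extra clause\n' >> contract.odt
current=$(sha256sum contract.odt | cut -d' ' -f1)
[ "$stored" != "$current" ] && echo "document was modified"
```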
To add a certificate to LibreOffice, you need to use Firefox:
Start Firefox by selecting › › .
Go to the certificate preferences by opening the menu, then select › › › .
Add your certificate by selecting and clicking and then locate your certificate.
To sign a document, first open it in LibreOffice. Then select › › . Select the certificate you want to use for signing, then click .
openSUSE Leap allows you to access certificates from the certificate store. For more information, refer to Chapter 12, Certificate Store.
You can customize LibreOffice to best suit your needs and working style. Toolbars, menus, and key combinations can all be reconfigured to help you more quickly access the features you use the most.
You can also assign macros to application events if you want specific actions to occur when those events take place. For example, if you always work with a specific spreadsheet, you can create a macro that opens the spreadsheet and assign the macro to the event.
This section contains simple, generic instructions for customizing your environment. The changes you make are effective immediately. This means you can see if the changes are what you wanted and go back and modify them if they are not. See the LibreOffice help files for detailed instructions.
To access the customization dialog in any open LibreOffice module, select › .
Click for more information about the options in the dialog.
In the customization dialog, click the tab .
From the drop-down box , select the toolbar you want to customize.
Activate the check boxes next to the commands you want to appear on the toolbar, and deactivate the check boxes next to the commands you do not want to appear. A short description for each command is shown at the bottom of the dialog.
With , select whether to save your customized toolbar in the current LibreOffice module or in the current document. If you decide to save it in the LibreOffice module, the customized toolbar is used whenever you open that module. If you decide to save it together with the current document, the customized toolbar is used whenever you open that document.
Repeat to customize additional toolbars.
Click .
To switch back to the original settings again, open the customization dialog, click the drop-down box and select . Click and to proceed.
Click the arrow icon at the right edge of the toolbar you want to change.
Click to display a list of buttons.
Select the buttons in the list to enable (check) or disable (uncheck) them.
You can add or delete items from current menus, reorganize menus, and even create new menus.
Click › › .
Select the menu you want to change, or click to create a new menu.
Modify, add, or delete menu items as desired.
Click .
You can reassign currently assigned key combinations and assign new ones to frequently used functions.
Click › › .
Select the keys you want to assign to a combination.
Select a and an appropriate .
Click to assign the function to the key or to remove an existing assignment.
Click .
LibreOffice also provides ways to assign macros to events such as application start-up or the saving of a document. The assigned macro runs automatically whenever the selected event occurs.
Click › › .
Select the event you want to change.
Assign or remove macros for the selected event.
Click .
Global settings can be changed in any LibreOffice module by clicking › on the menu bar. This opens the window shown in the figure below. A tree structure is used to display categories of settings.
The settings categories that appear depend on the module you are working in. For example, if you are in Writer, the LibreOffice Writer category appears in the list, but the LibreOffice Calc category does not. The LibreOffice Base category appears in both Calc and Writer. The Module column in the table shows where each setting category is available.
The following table lists the settings categories along with a brief description of each category:
| Settings Category | Description | Module |
|---|---|---|
| LibreOffice | Basic settings, including your user data (such as your address and e-mail), important paths, and settings for printers and external programs. | All |
| Load/Save | Settings related to the opening and saving of several file types. There is a dialog for general settings and several special dialogs to define how external formats should be handled. | All |
| Language Settings | Settings related to languages and writing aids, such as your locale and spell checker settings. This is also the place to enable support for Asian languages. | All |
| LibreOffice Writer | Settings related to word processing, such as the basic units, fonts and layout that Writer should use. | Writer |
| LibreOffice Writer/Web | Settings related to the HTML authoring features of LibreOffice. | Writer |
| LibreOffice Calc | Settings related to spreadsheets, such as spreadsheet appearance, Microsoft Excel compatibility options, and calculation options. | Calc |
| LibreOffice Impress | Settings related to presentations, such as enabling the smartphone remote control and the grid of the page to use. | Impress |
| LibreOffice Draw | Settings related to drawings, such as the grid of the page to use. | Draw |
| LibreOffice Base | Allows setting and editing database connections and registered databases. | Base |
| Charts | Allows defining the default colors used for newly created charts. | All |
| Internet | Allows configuring a proxy and the e-mail software to use. | All |
All settings listed in the table apply globally for the specified modules. That means they are used as defaults for every new document you create.
A template is a document containing only the styles and content that you want to appear in every document of that type. When a document is created or opened with the template, the styles are automatically applied to that document. Templates greatly enhance the use of LibreOffice by simplifying formatting tasks for a variety of different types of documents.
For example, in a word processor, you can write letters, memos, and reports, all of which look different and require different styles. Or, for example, for spreadsheets, you could use different cell styles or headings for certain types of spreadsheets. If you use templates for each of your document types, the styles you need for each document are always readily available.
LibreOffice comes with a set of predefined templates. You can also find additional templates on the Internet, for example at http://templates.libreoffice.org. For details, see Section 10.11, “For More Information”.
Creating your own templates requires some planning. You need to determine how you want the document to look, so you can create the styles you need in that template.
A detailed explanation of templates is beyond the scope of this section. Procedure 10.8, “Creating LibreOffice Templates” only shows how to generate a template from an existing document.
For text documents, spreadsheets, presentations, and drawings, you can create a template from an existing document as follows:
Start LibreOffice and open or create a document that contains the styles and content that you want to re-use for other documents of that type.
Click › › .
Choose a directory to save the template in by double-clicking one of the directory icons.
If you are in a subdirectory and want to go up again, use the path bar displayed above the directories.
From the toolbar, choose .
Specify a name for the template.
Click .
You can convert Microsoft Word templates like you would convert any other Word document. For more information, see Section 10.4.2, “Converting Documents to the OpenDocument Format”.
When exchanging documents with other people, it is sometimes useful to store metadata like the owner of the file, who it was received from, and a URL. LibreOffice lets you attach such metadata to the file. This helps you track metadata which you do not want to or cannot save in the content of the file. This feature is also the basis for later sorting, searching and retrieving your documents based on metadata.
As an example, we assume you want to set these properties to your file:
A title, subject, and some keywords
The owner of the file
Who sent you the file
To attach such metadata to your document, proceed as follows:
Click › . A dialog opens. It has, among others, the following tabs:
Change to the tab and insert title, subject, and your keywords.
Switch to the tab.
To add a row for a property, click .
In the column, click the drop-down box for the entry. A list of properties appears; from this list, choose .
Insert the name of the owner in the column.
Repeat from Step 4 but as the name of the property, this time, choose .
Optionally, repeat from Step 4 for more properties.
To remove a property, click the red icon at the end of the corresponding row.
Leave the dialog with .
Save the file.
LibreOffice contains extensive online help. In addition, a large community of users and developers support it. The following list shows some places where you can go for additional information.
Extensive help on performing any task in LibreOffice.
Home page of LibreOffice
Official question and answer page for LibreOffice.
Taming LibreOffice: books, news, tips and tricks.
Extensive information about creating and using macros.
Extension and template directory for LibreOffice.
Templates for creating labels with LibreOffice.
LibreOffice Writer is a full-featured word processor with page and text formatting capabilities. Its interface is similar to interfaces of other major word processors, and it includes some features that are usually found only in desktop publishing applications.
This chapter highlights a few key features of Writer. For more information about these features and for complete instructions for using Writer, look at the LibreOffice help or at the sources listed in Section 10.11, “For More Information”.
Much of the information in this chapter can also be applied to other LibreOffice modules. For example, other modules use styles similarly to how they are used in Writer.
There are multiple ways to create a new Writer document:
From Scratch. To create a new empty document, click › › .
Using a Wizard. To use a standard format and predefined elements for your own documents, use a wizard. Click › › and follow the steps.
From a Template. To use a template, click › › and open, for example, . From the list of text document templates, select the one that fits your needs.
For example, to create a business letter, click › › . Using the wizard, you can easily create a basic document using a standard format. A sample wizard dialog is shown in Figure 11.1.
Enter text in the document window as desired. Use the tools for applying and changing styles or the tools for direct formatting to adjust the appearance of the document. Use the menu or the relevant buttons in the toolbar to print and save your document. With the options under , add extra items to your document, such as a table, picture, or chart.
The traditional way of formatting office documents is direct formatting. That means you use a button, such as , which sets a certain property (in this case, a bold typeface). With styles, you can bundle a set of properties (for example, font size and font weight) and give the set a descriptive name, such as Headline, first level. Using styles rather than direct formatting has the following advantages:
Gives your pages, paragraphs, texts, and lists a consistent look.
Makes it easy to consistently change formatting later.
Allows reuse and import of styles from another document.
Change one style and its properties are passed on to its descendants.
Imagine that you emphasize text by selecting it and clicking the button . Later, you decide you want the emphasized text to be italicized. Now, without styles, you need to find all bold text and manually change it to italics.
If you had used a character style from the beginning, however, you would only need to change the style from bold to italics once. All text formatted with a style changes its appearance as the style is changed.
LibreOffice can use styles for applying consistent formatting to various elements in a document. The following types of styles are available in Writer:
| Type of Style | Function |
|---|---|
| Paragraph | Applies standardized formatting to the various types of paragraphs in your document. For example, apply a paragraph style to a first-level heading to set the font and font size, spacing above and below the heading, location of the heading, and other formatting specifications. |
| Character | Applies standardized formatting for types of text. For example, if you want emphasized text to appear in italics, you can create an emphasis style that italicizes selected text when you apply the style to it. |
| Frame | Applies standardized formatting to frames. For example, if your document uses marginal notes, you can create frames with specified borders, location, and other formatting, so that all of your marginal notes have a consistent appearance. Frames are also used for captioning images: A frame can keep the caption and the image together. Here, you can use frame styles to make sure that all your images have the same size and background color, for example. |
| Page | Applies standardized formatting to a specified type of page. For example, if every page of your document contains a header and footer except for the first page, you can use a first page style that disables headers and footers. You can also use different page styles for left and right pages so that you have bigger margins on the insides of pages and your page numbers appear on an outside corner. |
| List | Applies standardized formatting to specified list types. For example, you can define a checklist with square check boxes and a bullet list with round bullets, then easily apply the correct style when creating your lists. |
Direct formatting overrides any styles you have applied. For example, format a piece of text both with a character style and using the button . Now, the text will be bold, no matter what is set in the style.
To remove all direct formatting, first select the appropriate text, then right-click it and choose .
Likewise, if you manually format paragraphs using › , you can end up with inconsistent paragraph formatting. This is especially true if you copy and paste paragraphs from other documents with different formatting. However, if you apply paragraph styles, formatting remains consistent. If you change a style, the change is automatically applied to all paragraphs formatted with that style.
The side bar panel is a versatile
formatting tool for applying styles to text, paragraphs, pages, frames, and
lists. To open this panel, click
› , click the button (a T) in the side bar or press
F11.
LibreOffice comes with several predefined styles. You can use these styles as they are, modify them, or create new styles. Use the icons at the top of the panel to display formatting styles for the most common elements such as paragraphs, frames, pages or lists. To learn more about styles, continue with the instructions below.
To apply a style, select the element you want to apply the style to, and double-click the style in the panel . For example, to apply a style to a paragraph, place the cursor anywhere in that paragraph and double-click the desired paragraph style.
Alternatively, use the paragraph style selector in the toolbar .
By changing styles, you can change formatting throughout a document, rather than applying the change separately everywhere you want to apply the new formatting.
To change an existing style, proceed as follows:
In the panel , right-click the style you want to change.
Click .
Change the settings for the selected style.
For information about the available settings, refer to the LibreOffice online help.
Click or .
LibreOffice comes with a collection of styles to suit many needs of most users. However, if you need a style that does not yet exist and want to create your own style, follow the procedure below:
Open the panel with › , or pressing F11.
Make sure you are in the list of styles for the type of style you want to create.
For example, if you are creating a character style, make sure you are in the character style list by clicking the corresponding icon in the panel .
Right-click anywhere in the list of styles in the panel .
To open the style dialog, click . The tab is preselected.
Configure three basic properties of the new style:
The name of your style. Choose any name you like.
The style that follows your style. The style selected here is used when starting a new paragraph by pressing Enter. This is useful, for example, for headlines, after which you usually want to start a normal paragraph of text.
A style that your style depends on. If the selected style is changed, your style changes as well. For example, to make headers consistent, create a “parent” header style and have subsequent headers depend on it. This is useful when you only want to change the properties that need to be different.
For details about the style options available in any tab, click the button of the dialog.
Confirm with . This closes the window.
Let us assume you need a note with a different background and borders. To create such a style, proceed as follows:
Press F11. The panel opens.
Make sure you are in the list by checking that the pilcrow icon (¶) is selected.
Right-click anywhere in the list of styles in the panel and select .
Specify the following parameters in the tab :
Name: Note
Next Style: Note
Inherit from: - None -
Category: Custom Styles
Change the indentation in the tab, using the text field. If you also want more space above and below individual paragraphs, change the values in the and accordingly.
Switch to the tab and choose a color for the background.
Switch to the tab and determine your line arrangements, line style, color and other parameters.
Confirm with . This closes the window.
Select your text in your document and double-click the style . Your style parameters are applied to the text.
If you want to create double-sided printouts of your documents, especially if they are supposed to be bound, use separate page styles for even and odd pages. To create such page styles, proceed as follows:
Press F11. The panel opens.
Make sure you are in the list by checking that the paper sheet icon is selected.
Right-click anywhere in the list of styles in the panel and select .
Enter the following parameters in the tab :
Name: Left Content Page
Next Style: Leave empty, will be changed later
Inherit from: not applicable
Category: not applicable
Change additional parameters as you like in the other tabs. You can also adapt the page format and margins (tab ) or any headers and footers.
Confirm with . This closes the window.
Follow the instructions in Procedure 11.3, “Create an Even (Left) Page Style”, but use the string Right Content Page in the tab.
Select the entry from the drop-down box .
Choose the same parameters as you did for the left page style. If you used different sizes for the left and right margin of your even page, mirror these values in your odd pages.
Confirm with . This closes the window.
Then connect the left page style with the right page style:
Right-click the entry and choose .
Choose from the drop-down box .
Confirm with . This closes the window.
To attach your style, make sure your page is a left (even) page and double-click . Whenever your text exceeds the length of a page, the following page automatically receives the alternative page style.
You can use Writer to work on large documents. Large documents can be either a single file or a collection of files assembled into a single document.
If you are working with a very large document, such as a book, it can be easier to manage the book with a master document, rather than keeping the book in a single file. A master document enables you to quickly apply formatting changes to a large document or to jump to each subdocument for editing.
A master document is a Writer document that serves as a container for multiple Writer files. You can maintain chapters or other subdocuments as individual files collected in the master document. Master documents are also useful if multiple users are working on a single document. You can separate each user’s section of the document into subdocuments collected in a master document, allowing multiple writers to work on their subdocuments at the same time without fear of overwriting the work of others.
Click › .
or
Open an existing document and click › › .
The window will open. In it, select ( ), then choose .
Select a file to add an existing file to the master document.
In the window or panel, choose ( ), then select .
A file chooser opens, allowing you to save the new document. Specify a name, then click .
When you are done editing the new document, save it. Then switch back to the master document.
Update the master document with the contents of the new document. To do so, right-click the entry of your new document in the , then click › .
To enter some text directly into the master document, select › .
The LibreOffice help files contain more complete information about working with master documents. Look for the topic named Using Master Documents and Subdocuments.
The styles from all of your subdocuments are imported into the master document. To ensure that formatting is consistent throughout your master document, use the same template for each subdocument. Doing so is not mandatory.
However, if subdocuments are formatted differently, you might need to do some reformatting to successfully bring subdocuments into the master document without creating inconsistencies. For example, if two documents within a master document include styles with the same name, the master document will use the formatting specified for the style in the document imported first.
In addition to being a full-featured word processor, Writer also functions as an HTML editor. You can style HTML pages like any other document, but there are specific styles that help with creating good HTML. You can view the document as it will appear online, or you can directly edit the HTML code.
Click › › .
Press F11 to open the panel .
At the bottom of the panel , click the drop-down box to open it.
Select .
Create your HTML page, using the styles to tag your text.
Click › .
Select the location where you want to save your file and name the file. Make sure that in the bottom drop-down box, is selected.
Click .
To edit HTML code directly or to see the HTML code created when you edit the HTML file as a Writer document, click › . In HTML Source mode, the list is not available.
The first time you switch to mode, you are prompted to save the file as HTML, if you have not already done so.
To switch back from mode to Web Layout, click › again.
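As an illustration of what HTML Source mode shows, a page written in Writer using a heading style and normal paragraphs might correspond to HTML source roughly like the sketch below. This is a simplified, hypothetical example; the markup Writer actually generates includes additional metadata and styling.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <title>My Page</title>
</head>
<body>
  <!-- a "Heading 1" paragraph style typically maps to an h1 element -->
  <h1>Welcome</h1>
  <!-- a normal paragraph maps to a p element -->
  <p>This text was written in Writer and saved as HTML.</p>
</body>
</html>
```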
Calc is the LibreOffice spreadsheet module. Spreadsheets consist of several sheets, containing cells which can be filled with elements like text, numbers, or formulas. A formula can manipulate data from other cells to generate a value for the cell in which it is inserted. Calc also allows you to define ranges, filter and sort data, and create charts from data to present it graphically. Using pivot tables, you can combine, analyze, or compare larger amounts of data.
This chapter can only introduce some very basic Calc functionality. For more information and for complete instructions, see the LibreOffice application help and the sources listed in Section 10.11, “For More Information”.
Calc can process many VBA macros in Excel documents. However, support for VBA macros is not complete. When opening an Excel spreadsheet that makes heavy use of macros, you might discover that some do not work.
There are two ways to create a new Calc document:
From Scratch. To create a new empty document, click › › .
From a Template. To use a template, click › › and open, for example, . From the list of spreadsheet templates, select the one that fits your needs.
Access the individual sheets by clicking their respective tabs at the bottom of the window.
Enter data in the cells as desired. To adjust the appearance, either use the toolbar or side bar panel, or use the menu—or define styles as described in Section 12.2, “Using Formatting and Styles in Calc”. Use the menu or the relevant buttons in the toolbar to print and save your document.
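Formulas entered into cells start with an equals sign and can use Calc's built-in functions. The following lines are purely illustrative; the cell references are hypothetical, and the default argument separator is a semicolon (this depends on your locale settings):

```
=SUM(B2:B13)                 total of the values in cells B2 through B13
=AVERAGE(B2:B13)             average of the same range
=IF(B2>1000;"high";"low")    conditional text, depending on the value in B2
```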
Calc comes with a few built-in cell and page styles to improve the appearance of your spreadsheets and reports. Although these built-in styles are adequate for many uses, you will probably find it useful to create styles for your own frequently used formatting preferences.
Click › › or press F11.
At the top of the panel , click either (a green cell) or (a document).
Right-click anywhere in the list of styles in the panel . Then click .
Specify a name for the style and use the various tabs to set the desired formatting options.
When you are done configuring the style, click .
Click › › .
At the top of the panel , click either (a green cell) or (a document).
Right-click the name of the style you want to change, then click .
Change the desired formatting options.
When you are done configuring the style, click .
To apply a style to specific cells, select the cells you want to format. Then double-click the style you want to apply in the window.
Sheets are a good method to organize your calculations. For example, if you have a business, accounting might be much clearer if you create a sheet for each month.
To insert a new sheet after the last sheet, click the button next to the sheet tabs.
To insert one or more new sheets at a specific position, or to import sheets from another file, do the following:
Right-click a sheet tab and select . A dialog opens.
Decide whether the new sheet should be positioned before or after the selected sheet.
To create a new sheet, make sure the radio button is activated. Enter the number of sheets and the sheet name. Skip the rest of this step.
Alternatively, to import a sheet from another file, do the following:
Select and click .
Select the file name and confirm with . All the sheet names are now displayed in the list.
Select the sheet names you want to import by holding the Shift key and clicking them.
To add the sheet or sheets, confirm with .
To rename a sheet, right-click the tab of the sheet and select . Alternatively, you can also double-click the sheet tab.
To delete one or multiple sheets, do the following: Select the sheet you want to delete. To select more than one sheet, hold down Shift while making the selection. Then right-click the tab of the sheet, choose and confirm with .
Conditional formatting is a useful feature to highlight certain values in your spreadsheet. You define a condition, and if the condition is true, a style is applied to each cell that fulfills it.
Before you apply conditional formatting, choose › › . You should see a check mark in front of .
Define a style first. This style is applied to each cell when your
condition is true. Use › or press F11. For more information, see
Procedure 12.1, “Creating a Style”. Confirm with
.
Select the cell range where you want to apply your condition.
Select › › from the menu. A dialog opens.
You now see a template for a new condition. Conditions can operate in multiple modes:
The condition tests if a cell matches a certain value. Next to the first drop-down box, select an operator such as , , or .
The condition tests if a certain formula returns
true.
The condition tests if a certain date value is reached.
This mode allows creating data visualizations that depend on the value of a cell, similarly to . However, with , you can use one condition to apply an entire range of styles.
The types of styles that can be used are color scales (cell background color), data bars (bars with changing width in the cell) and icon sets (an icon in the cell).
For example, a color scale allows assigning 0
a black background and 100 a green background.
All values in between are calculated automatically. For
example, 50 receives a dark green background.
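The automatic calculation of the in-between colors is plain linear interpolation between the two end colors. As a rough sketch of the principle (this is an illustration in Python, not LibreOffice code; the function name is made up):

```python
def interpolate_color(low, high, t):
    """Linearly blend two RGB colors; t is a fraction between 0.0 and 1.0."""
    return tuple(int(l + (h - l) * t) for l, h in zip(low, high))

BLACK = (0, 0, 0)    # color assigned to the value 0
GREEN = (0, 255, 0)  # color assigned to the value 100

# A cell value of 50 lies halfway between the two, so it receives
# a dark green, just as described above.
print(interpolate_color(BLACK, GREEN, 50 / 100))  # → (0, 127, 0)
```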
For this example, keep the default: .
Select an operator and the value of the cell you want to test for.
Choose the style you want to apply when this condition is
true or click to define a
new appearance.
If you need additional conditions, click . Then repeat the previous steps.
Confirm with . Now the style of your cells has changed.
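For the formula-based condition mode mentioned above, the condition is any Calc expression that evaluates to TRUE or FALSE for each cell. The examples below use standard Calc functions, but the cell references are hypothetical and assume the selected range starts at cell A1:

```
ISODD(A1)                   true for cells containing odd numbers
A1>AVERAGE($A$1:$A$100)     true for values above the range average
```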
Grouping a cell range allows hiding parts of a spreadsheet. This makes spreadsheets more readable, as you can hide all the parts you are not currently interested in. It is possible to group rows or columns and nest groups in other groups.
To group a range, proceed as follows:
Select a cell range in your spreadsheet.
Select › › . A dialog appears.
Decide if you want to group your selected range by rows or by columns. Confirm with .
After grouping selected cells, a line indicating the grouped cell range appears in the upper-left margin. Fold or unfold the cell range with the and icons. The numbers at the top left of the margins display the depth of your groups and can be clicked too.
To ungroup a cell range, click into a cell which belongs to a group and select › › . The line in the margin disappears. The innermost group is always deleted first.
If you have a spreadsheet with lots of data, scrolling usually makes the header disappear. LibreOffice can lock rows or columns or both, so they remain fixed as you scroll around.
To freeze a single row or a single column, proceed as follows:
To create a frozen area before a row, click the header of the row
(1, 2, 3, ...).
Alternatively, to create a frozen area above a column, click the header of
the column (A, B,
C, ...).
Select › . A dark line appears, indicating the frozen area.
It is also possible to freeze both rows and columns:
Click into the cell to the right of the column and below the row you want frozen. For example, if your header occupies the space from A1 to B3, click cell C4.
Select › . A dark line appears, indicating which area is frozen.
To unfreeze, select › . The check mark before the menu item disappears.
Besides LibreOffice Writer and LibreOffice Calc, LibreOffice also includes the modules Impress, Base, Draw, and Math. With these you can create presentations, design databases, draw up graphics and diagrams, and create mathematical formulas.
Use LibreOffice Impress to create presentations for screen display or printing. If you have used other presentation software, Impress makes it easy to switch. It works very similarly to other presentation software.
There are multiple ways to create a new Impress document:
From Scratch. To create a new empty document, click › › .
Using a Wizard. To use a standard format and predefined elements for your documents, use a wizard. Click › › and follow the steps.
From a Template. To use a template, click › › and open, for example, . From the list of presentation templates, select the one that fits your needs.
The following procedure describes how to create a presentation by using the wizard. Proceed as follows:
Start LibreOffice.
Select › › .
Choose . Select from the pop-up menu to set your preferred background and click .
Select an output medium. The output medium is the form the final presentation will take, such as: , , a slideshow on a 4:3 or a 16:9 , among other choices.
To see a thumbnail showing your choices, make sure is activated. If all options are set according to your wishes, click .
To use effects for slide transitions, select an and its . The effect will be previewed immediately.
Either use the default presentation type or choose to specify the amount of time each page displays and the length of the pause between presentations.
If all options are set according to your wishes, click .
The presentation opens, ready for editing.
Master pages give your presentation a consistent look by defining what fonts and other design elements are used. Impress uses two types of master pages:
Contains elements that appear on all slides. For example, you might want your company logo to appear in the same place on every slide. The slide master also determines the text formatting style for the heading and outline of every slide that uses that master page, as well as any information you want to appear in a header or footer.
Determines the formatting and appearance of the notes in your presentation.
Impress comes with a collection of preformatted master pages. To customize presentations further, create your own slide masters.
Start Impress with an existing presentation or create a new one as described in Section 13.1.1, “Creating a Presentation”.
Click › .
This opens the current slide master in . The toolbar appears.
Right-click the left-hand panel, then click .
Edit the slide master until it has the desired look.
Master view allows editing outline styles by directly formatting the sample text on the slide.
To finish editing slide masters, in the toolbar, click . Alternatively, choose › .
When you have created all of the slide masters you want to use in your presentations, you can save them in an Impress template. Then, any time you want to create presentations that use those slide masters, open a new presentation with your template.
Slide masters can be applied to selected slides or to all slides of a presentation.
Open your presentation.
(Optional) To apply a slide master to multiple slides but not all slides: Select the slides that you want a slide master applied to.
To select multiple slides, press Ctrl in the while clicking the slides you want to use.
In the Tasks pane, open the and click the master page you want to apply. The slide master is applied to the corresponding page or pages.
If you do not see the , click › .
LibreOffice includes the database module Base. Use Base to design a database to store many kinds of information, from a simple address book or recipe file to a sophisticated document management system.
Tables, forms, queries, and reports can be created manually or by using convenient wizards. For example, the Table Wizard contains several common fields for business and personal use. Databases created in Base can be used as data sources, such as when creating form letters.
It is beyond the scope of this document to detail database design with Base. Find more information at the sources listed in Section 10.11, “For More Information”.
Base comes with several predefined database fields to help you create a database. A wizard guides you through the steps to create a new database. The steps in this section are specific to creating an address book using predefined fields, but you can easily adapt them to any of the other built-in database options.
The process for creating a database can be broken into several subprocesses:
Start LibreOffice Base.
The starts.
You can choose between creating an HSQLDB or Firebird database.
This database format is also available in older versions of OpenOffice.org and LibreOffice. It depends on Java being installed on the computer.
This database format can only be used in newer versions of LibreOffice. It does not depend on Java. When you do large database operations, Firebird can perform better.
Proceed with .
Click to make your database information available to other LibreOffice modules and select the check boxes to and . Then click .
Browse to the directory where you want to save the database, specify a name for the database, then click .
After you have created the database, if you have selected the check box, the table wizard opens. If you have not, go to the area and click . Next, define the fields you want to use in your database table.
In this example, set up an address database.
For this example, click .
The list changes to show the predefined tables for personal use, including the address table template. The table templates listed under contain predefined business tables.
In the list, click .
The available fields for the predefined address book appear in the menu.
In the menu, click the fields you want to use in your address book.
Select one item at a time by clicking. Alternatively, to select multiple items, hold Shift and click each of the items separately.
Click the icons and to move selected items onto or off the list.
To move all available fields to the menu, click the icon .
Use the icons and to adjust the order of the selected entries, then click .
The fields appear in the table and forms in the order in which they are listed.
Make sure each of the fields is defined correctly.
You can change the field name, type, maximum characters and whether it is a required field. For this example, leave the settings as they are, then click .
Make sure that and are activated. Additionally activate .
Proceed with .
Give the table a name, and activate .
Proceed with .
Next, create the form to use when entering data into your address book.
After the previous step, you should be in the already. Otherwise, open it by going to the main window. Under , right-click the correct table. Click .
In the , click the double right-arrow icon to move all available fields to the list, then click .
To add a subform, activate , then click .
For this example, accept the default selections.
Select how you want to arrange your form, then click .
Select and leave all of the check boxes deactivated, then click .
Apply a style and field border, then click .
For this example, accept the default selections.
Name the form, activate , then click .
After the form has been defined, you can modify the appearance of the form to suit your preferences.
After the previous step, you should be in the editor already. If not, select the right form by clicking in the side bar of the main window. Then, in the area, right-click the correct form. Select .
Arrange the fields on the form by dragging them to their new locations.
For example, move the field , so it appears to the right of the field .
When you have finished modifying the form, save it and close it.
After you have created your database tables and forms, you are ready to enter your data. You can also design queries and reports to help sort and display the data.
Refer to LibreOffice online help and other sources listed in Section 10.11, “For More Information” for additional information about Base.
Use LibreOffice Draw to create graphics and diagrams. You can export your drawings to the most common vector graphics formats and import them into any application that lets you import graphics, including other LibreOffice modules. You can also create Adobe* Flash* (SWF) versions of your drawings.
Start LibreOffice Draw.
Use the toolbar at the right side of the window to create a graphic. To create a new shape or text object, use the shape buttons of the toolbar:
To create a single shape or text object, click a shape button once. Then click and drag over the document to create an object.
To create multiple shapes or text objects, double-click a shape button. Then click and drag over the document to create the objects. When you are done, click the mouse pointer icon in the toolbar.
Save the graphic.
To embed an existing Draw graphic into a LibreOffice document, select › › . Select and click to navigate to the Draw file to insert.
To be able to edit the graphic later on its own, activate .
If you insert a file as OLE object, you can edit the object later by double-clicking it.
One particularly useful feature of Draw is the ability to open it from other LibreOffice modules, so you can create a drawing that is automatically imported into your document.
From a LibreOffice module (for example, from Writer), click › › › › .
The user interface of Writer will now be replaced by that of Draw.
Create your drawing.
Click in your document, outside the Draw frame.
The drawing is automatically inserted into your document.
It is usually difficult to include complex mathematical formulas in your documents. To make this task easier, the LibreOffice Math equation editor lets you create formulas using operators, functions, and formatting assistants. You can then save those formulas as an object that can be imported into other documents. Math functions can be inserted into other LibreOffice documents like any other graphic object.
Math is not a calculator. The functions it creates are graphical objects. Even if they are imported into Calc, these functions cannot be evaluated.
To create a formula, proceed as follows:
Start LibreOffice Math.
Click › › . The formula window opens.
Enter your formula in the lower part of the window. For example, the binomial theorem in LibreOffice Math syntax is:
(a + b)^2 = a^2 + 2 a b + b^2
The result is displayed in the upper part of the window.
Use the side bar panel or right-click the lower part of the window to insert other terms. If you need symbols, use › to, for example, insert Greek or other special characters.
Save the document.
The result is shown in Figure 13.1, “Mathematical Formula in LibreOffice Math”:
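As the note above points out, Math only renders the formula; it does not evaluate it. If you want to verify the identity itself, you need a separate tool. For example, the binomial theorem entered above can be checked numerically with a short Python snippet:

```python
# Verify (a + b)^2 == a^2 + 2ab + b^2 for a grid of sample values.
for a in range(-3, 4):
    for b in range(-3, 4):
        assert (a + b) ** 2 == a ** 2 + 2 * a * b + b ** 2

print("binomial identity holds for all sampled values")
```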
It is possible to include your formula in Writer, for example. To do so, proceed as follows:
Create a new Writer document or open an already existing one.
Select › › in the main menu. The window appears.
Select .
Click to locate your formula. To choose the formula file, click .
To be able to edit the formula later on its own, activate .
Confirm with . The formula is inserted at the current cursor position.
The Mozilla Firefox Web browser is included with openSUSE® Leap. With features like tabbed browsing, pop-up window blocking and download management, Firefox combines the latest browsing and security technologies with an easy-to-use interface. Firefox gives you easy access to different search engines to help you find the information you need.
Evolution makes storing, organizing, and retrieving your personal information easy, so you can work and communicate more effectively with others. It is a professional groupware program and an important part of the Internet-connected desktop.
Empathy is an instant messaging (IM) client that allows you to connect to multiple accounts simultaneously. Chat live with your contacts in one tabbed interface, regardless of which IM system they use. Empathy uses Telepathy for protocol support.
Ekiga is an application you can use for making phone calls via Voice over IP (VoIP), for video conferencing and for instant messaging.
The Mozilla Firefox Web browser is included with openSUSE® Leap. With features like tabbed browsing, pop-up window blocking and download management, Firefox combines the latest browsing and security technologies with an easy-to-use interface. Firefox gives you easy access to different search engines to help you find the information you need.
To start Firefox, select › › .
There are two ways to find information in Firefox: to search the Internet with a search engine, use the search bar. To search the page currently displayed, use the find bar.
Firefox has a search bar that can access different engines like Google,
Yahoo, or Amazon. For example, if you want to find information about SUSE
using the current engine, click in the search bar, type
SUSE, and press Enter. The
results appear in your window.
To choose a different search engine, type your search term, then click one of the search provider icons at the bottom of the appearing pop-up.
If you want to change the order of search engines, or add or delete one, proceed as follows.
Click the icon to the left of the search bar.
From the pop-up, select . The dialog shows the engine that is currently set as default search engine and other available search engines.
To change the order of entries, use the mouse to drag them.
To delete an entry, select it and click .
To add a search engine, click . Firefox displays a Web page with available search plug-ins. To install a search plug-in, select it and click .
Some Web sites offer search engines that you can add directly to the
search bar. Whenever you are visiting such a Web site, the icon to the
left of the search bar gains a + sign. Click the icon
and select .
Firefox lets you define your own keywords: abbreviations
to use as a URL shortcut for a particular search engine. If you have
defined ws as a keyword for the Wikipedia search for
example, you can type ws
SEARCHTERM into the location bar to
search Wikipedia for SEARCHTERM.
To assign a shortcut for a search engine from the search bar, click the icon to the left of the search bar and select . Select a search engine, double-click its column, enter a keyword and press Enter.
It is also possible to define a keyword for any search field on a Web site. Proceed as follows:
Right-click the search field and choose from the menu that opens. The dialog appears.
In , enter a descriptive name for this keyword.
Enter your for this search.
this keyword.
Using keywords is not restricted to search engines. You can also add a
keyword to a bookmark (via the bookmark's properties). For example, if
you assign suse to the SUSE home page bookmark, you
can open it by typing suse into the location bar.
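In a keyword bookmark used for searching, the placeholder %s in the bookmark's location is replaced with your search term. A hypothetical Wikipedia example (the keyword ws and the URL are only an illustration):

```
Location: https://en.wikipedia.org/wiki/Special:Search?search=%s
Keyword:  ws
```

Typing ws linux into the location bar would then open the Wikipedia search for linux.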
To search inside a Web page, in the menu bar, click › or press Ctrl–F. The find bar opens. It is usually displayed at the bottom of a window. Type your query in the text box. Firefox finds the first occurrence of this phrase as you type. You can find other occurrences of the phrase by pressing F3 or the button in the find bar. Clicking the button will highlight all occurrences of the phrase. Checking the option makes the query case-sensitive.
Firefox also offers two quick-find options. Click anywhere on the Web page where you want to start a search, then type the key / immediately followed by the search term. The first occurrence of the search term is highlighted as you type. Use F3 to find the next occurrence. It is also possible to limit quick-find to links only. This search option is available by typing the key '.
Bookmarks offer a convenient way of saving links to your favorite Web sites. Firefox not only makes it very easy to add new bookmarks with just one mouse click, it also offers multiple ways to manage large bookmark collections. You can sort bookmarks into folders, classify them with tags, or filter them with smart bookmark folders.
Add a bookmark by clicking the star in the location bar. The star will turn blue to indicate the page was bookmarked. The bookmark will be saved in the folder under the page title. To change the name and folder of your bookmark or add tags, after bookmarking, click the star again. This will open a pop-up where you can make your changes.
To bookmark all open tabs, right-click a tab and choose . Firefox asks you to create a new folder for the tab links.
To remove a bookmark, open the bookmarked location. Then, click the star and click .
The can be used to manage the properties (name and location) for each bookmark and organize the bookmarks into folders and sections. It resembles Figure 14.3, “The Firefox Bookmark Library”.
To open the , in the menu bar, click › . The library window is split into two parts: the left pane shows the folder tree view, the right pane the subfolders and bookmarks of the selected folder. Use to customize the right pane. The left pane contains three main folders:
Contains your complete browsing history. You cannot alter this list other than by deleting entries from it.
Lists bookmarks for each tag you have specified. See Section 14.4.2, “Tags” for more information on tags.
This category contains the three main bookmark folders:
Contains the bookmarks and folders displayed beneath the location bar. See Section 14.4.6, “The Bookmarks Toolbar” for more information.
Holds the bookmarks and folder accessible via the entry in the main menu or the bookmarks side menu.
Contains all bookmarks created with a single click on the star in the location bar. This folder is only visible in the library and the bookmarks sidebar.
Organize your bookmarks using the right pane. Choose actions for folders or bookmarks either from the context menu that opens when you right-click an item or from the menu. The properties of a chosen folder or bookmark can be edited in the bottom part of the right pane. By default, only , , and are displayed for a bookmark. Click the arrow next to to gain access to all properties.
To rearrange your bookmarks, use the mouse to drag them. You can use this to move a bookmark or a folder to a different folder, or to change the order of bookmarks in a folder.
Tags offer a convenient way to file a bookmark under several categories.
You can tag a bookmark with as many terms as you want. For example, to
access all sites tagged with suse, enter
suse into the location bar. For each tag, an item is
automatically created in the Recent Tags folder of
the library. Drag and drop an item for a tag onto the bookmark toolbar to
easily access it.
To add tags to a bookmark, open the bookmark in Firefox and click the yellow star in the location bar. The dialog opens, where you can add a comma-separated list of tags. It is also possible to add tags via the bookmark's properties dialog, which you can open in the library or by right-clicking a bookmark in the menu or the toolbar.
To import bookmarks from another browser or from a file in HTML format, open the library by choosing › from the menu bar. To start the Import Wizard, click › and choose an import location. Start the import by clicking . Bookmarks from an HTML file are imported as is.
Exporting bookmarks is also done via
in the library window. To save your bookmarks as an HTML file, choose
. To create a backup of your
bookmarks, choose . Firefox uses a JavaScript
Object Notation file format (.json) for backups.
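An exported HTML file uses the plain-text Netscape bookmark format, so it can be processed with standard command line tools. The following sketch lists all stored URLs (the file name bookmarks.html is an assumption; you choose the name on export):

```shell
# Minimal sketch: list all URLs stored in a bookmarks file exported as HTML.
# Firefox writes the Netscape bookmark format, which uses uppercase HREF
# attributes, so a simple pattern match is enough for a quick overview.
grep -oE 'HREF="[^"]*"' bookmarks.html | sed -e 's/^HREF="//' -e 's/"$//'
```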
To restore a bookmark backup, click › . Then locate the backup you want to restore from.
Live Bookmarks display headlines in your bookmark menu and keep you up to date with the latest news. This enables you to save time with one glance at your favorite sites. Live bookmarks update automatically. Many sites and blogs support this format.
To create a Live Bookmark, look for orange buttons on Web sites that either
read RSS or consist of a dot and three nested quarter
circles. Click the icon. Usually, that will lead you to a page where all
the headlines of the page are displayed. On that page, choose
. A dialog opens in which to select the
name and location of your live bookmark. Confirm with
. This page also lets you choose alternative
applications to subscribe with, such as .
Smart bookmark folders are virtual bookmark folders that are dynamically updated. There are three smart bookmark folders: The links are available from your bookmarks toolbar. links and are located in the bookmarks menu.
The Bookmarks Toolbar is displayed beneath the location
bar and lets you quickly access bookmarks. You can also add, organize, and
edit bookmarks directly. By default, the Bookmarks
Toolbar is populated with a predefined set of bookmarks organized
into several folders (see Figure 14.1, “The Browser Window of Firefox”).
To manage the Bookmarks Toolbar you can use the library
as described in Section 14.4.1, “Organizing Bookmarks”. Its
content is located in the folder . It
is also possible to manage the toolbar directly. To add a folder, bookmark,
or separator, right-click an empty space in the toolbar and select the
appropriate entry from the pop-up menu. To add the current page to the bar,
click the icon of the Web page in the location bar and drag it to the
desired position on the bookmarks toolbar. Hovering over an existing
bookmark folder will automatically open it, enabling you to place the
bookmark within this folder.
To manage a certain folder or bookmark, right-click it. A menu opens which lets you it or change its . To move or copy an entry, choose or and it to the desired position.
Keep track of your current and past downloads with the download manager. To start the download manager, in the menu bar, click › . While a file is downloading, a progress bar indicates the download status. If necessary, pause the download and resume it later. To open a downloaded file with the associated application, click . To open the location to which the file was saved, choose . only deletes the entry from the download manager; it does not delete the file from the hard disk.
By default, all files are downloaded to ~/Downloads. To
change this behavior, in the menu bar, click › . Go to
. Under , either
choose another location or .
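On openSUSE Leap, the default download location follows the xdg-user-dirs convention of the desktop. As a sketch of how that default is resolved (the helper function name and the fallback path are assumptions, not part of Firefox):

```shell
# Sketch: resolve the desktop download directory the way xdg-user-dirs does,
# falling back to ~/Downloads when no configuration is present.
get_download_dir() {
  conf="${XDG_CONFIG_HOME:-$HOME/.config}/user-dirs.dirs"
  # user-dirs.dirs is a shell fragment that sets XDG_DOWNLOAD_DIR
  if [ -r "$conf" ]; then
    . "$conf"
  fi
  echo "${XDG_DOWNLOAD_DIR:-$HOME/Downloads}"
}

get_download_dir
```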
If your browser crashes or is closed while downloading, all pending downloads will automatically be resumed in the background the next time you start Firefox. A download that was paused before the browser was closed can be resumed manually via the download manager.
Since browsing the Internet has become more risky, Firefox offers various measures to make browsing safer. It automatically checks whether you are trying to access a site known to contain harmful software (malware) or a site known to steal sensitive data (phishing) and stops you from entering these sites. The Instant Web Site ID lets you easily check a site's legitimacy, and a password manager and the pop-up blocker offer additional security. With Private Browsing, you can surf the Internet without Firefox recording data on your computer.
Firefox allows you to check the identity of a Web page with a single glance. The icon in the location bar next to the address indicates which identity information is available and whether communication is encrypted:
The site does not provide any identity information and communication between Web server and browser is not encrypted. Do not exchange sensitive information with such sites.
This site is from a domain that has been verified by a certificate, so you can be sure that you are really connected to the very site it claims to be. However, the site tried to load additional elements, such as images or scripts over an insecure connection. Firefox has blocked these items. Therefore, the page can look broken.
This site is from a domain that has been verified by a certificate, so you can be sure that you are really connected to the very site it claims to be. Communication with a “gray-padlock” site is always encrypted.
This site completely identifies itself by a certificate that ensures a site is owned by the person or organization it claims to be. This is especially important when exchanging very sensitive data (for example when doing money transactions over the Internet). In this case you can be sure to be on the Web site of your bank when it sends complete identity information. Communication with a “green-padlock” server is always encrypted.
To view detailed identity information, click the icon of the Web site in the location bar. In the opening pop-up, click to open the window. Here, you can view the site's certificate, the encryption level, and information about stored passwords and cookies.
With the view you can set per-site permissions for image loading, pop-ups, cookies and installation permissions. The view lists all images, background graphics and embedded objects from a site and displays further information on each item together with a preview. It also lets you save individual items.
Firefox comes with a certificate store for identifying certificate authorities (CA). Using these certificates enables the browser to automatically verify certificates issued by Web sites. If a Web site issues a certificate that has not been signed by one of the CAs from the certificate store, it is not trusted. This ensures that no spoofed certificates are accepted.
Large organizations usually use their own certificate authorities in-house
and distribute the respective certificates via the system-wide certificate
store located at /etc/pki/nssdb. To configure Firefox
(and other Mozilla tools, such as Thunderbird) to use this system-wide CA
store in addition to its own, export the NSS_USE_SHARED_DB
variable. For example, you can add the following line to
~/.bashrc:
export NSS_USE_SHARED_DB=1
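A minimal shell session putting the above together; the certutil command for inspecting the store is shown as a comment only, since it requires an extra package (the mozilla-nss-tools package name is an assumption):

```shell
# Enable the shared NSS database for the current shell session.
export NSS_USE_SHARED_DB=1

# To make the setting permanent, append the same line to ~/.bashrc:
#   echo 'export NSS_USE_SHARED_DB=1' >> ~/.bashrc
# The system-wide store can be inspected with certutil (shipped in the
# mozilla-nss-tools package; package name is an assumption):
#   certutil -d sql:/etc/pki/nssdb -L
echo "NSS_USE_SHARED_DB=$NSS_USE_SHARED_DB"
```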
Alternatively or additionally you can manually import certificates. To do so, in the menu bar, open the dialog by clicking › . Select › › › › and select the certificate to import. Only import certificates you absolutely trust!
Each time you enter a user name and a password on a Web site, Firefox offers to store this data. A pop-up at the top of the page opens, asking you whether you want Firefox to remember the password. If you accept by clicking , the password will be stored on your hard disk in an encrypted format. The next time you access this site, Firefox will automatically fill in the login data.
To review or manage your passwords, open the password manager by clicking › › › in the menu bar. The password manager opens with a list of sites and their corresponding user names. By default, the passwords are not displayed. You can click to display them. To delete single or all entries from the list, click or , respectively.
To protect your passwords from unauthorized access, you can set a master password that is required when managing or adding passwords. In the menu bar, click › , choose the category and activate .
By default, Firefox keeps track of your browsing history by storing content and links of visited Web sites, cookies, downloads, passwords, search terms and formula data. Collecting and storing this data makes browsing faster and more convenient. However, when you use a public terminal or a friend's computer, for example, you may want to turn this behavior off. In Private Browsing mode Firefox will not keep track of your browsing history nor will it cache the content of pages you have visited.
To enable the Private Browsing mode, in the menu bar, click
› . The current Web site and all open tabs will
be replaced by the Private Browsing information screen. As long as you
browse in private mode, the string (Private Browsing)
will be displayed in the titlebar of the window.
Disable Private Browsing by closing the private window.
To make Private Browsing the default mode, open the tab in the Preference window as described in Section 14.7.1, “Preferences”, set the option to and then choose .
Downloads and bookmarks you made during Private Browsing mode will be kept.
Firefox can be customized extensively.
Change the way Firefox behaves by altering its preferences.
Add functionality by installing extensions.
Change the look and feel by installing themes.
To manage extensions, themes and plug-ins, Firefox has an add-on manager.
Firefox offers a wide range of configuration options. These are available by choosing › in the menu bar. Each option is described in detail in the online help, which can be accessed by clicking the question mark icon in the dialog.
By default, Firefox automatically restores your session—windows and tabs—only after it has crashed, or after a restart caused by an extension. However, it can be configured to restore a session every time it is started: Open the Preferences dialog as described in Section 14.7.1, “Preferences” and go to the category . Set the option to .
When you have multiple windows open, they will only all be restored the next time you start Firefox if you closed them at once with › (from the menu bar) or with Ctrl–Q. If you close the windows one by one, only the last window will be restored.
When sending a request to a Web server, the browser always sends the information about which language is preferred by the user. Web sites that are available in more than one language (and are configured to evaluate this language parameter) will display their pages in the language the browser requests. On openSUSE Leap, the preferred language is preconfigured to use the same language as the desktop. To change this setting, open the window as described in Section 14.7.1, “Preferences”, go to the category and your preferred language.
By default, Firefox checks your spelling as you type into multi-line text boxes. Misspelled words are underlined in red. To correct a word, right-click it and select the correct spelling from the context menu. If the word is correct, you can also add it to the dictionary.
To change or add a dictionary, right-click anywhere in a multi-line text box and select the appropriate option from the context menu. Here you may also disable spell-checking for this text box. If you want to globally disable spell checking, open the window as described in Section 14.7.1, “Preferences” and go to the category . Deactivate .
Extensions let you personalize Firefox to fit your needs. With extensions, you can change the look and feel of Firefox, enhance existing functionality, and add functions. For example, extensions can enhance the download manager, show the weather, or control Web music players. Other extensions assist Web developers or increase security by blocking content such as ads or scripts.
There are thousands of extensions available for Firefox. With the add-ons manager, you can install, enable, disable, update, and remove extensions.
If you do not like the standard look and feel of Firefox, install a new theme. Themes do not change the functionality, only the appearance of the browser.
To add an extension or theme, start the add-ons manager with › from the menu bar. It opens on the tab either displaying a choice of recommended add-ons or the results of your last search.
Use the field to search for specific add-ons. Click an entry in the list to view a short description. Install the add-on by clicking or open a Web page with detailed information by clicking the link.
To activate freshly installed extensions or themes, Firefox sometimes needs to be restarted by clicking in the add-ons manager. Restart this way to make sure that your browsing session will be restored.
The Add-ons Manager also offers a convenient interface to manage extensions, themes, and plug-ins. can be enabled, disabled or uninstalled. If an extension is configurable, its configuration options can be accessed via the button. In the tab you may a theme, or activate a different theme by clicking . Pending extension and theme installations are also listed. Select to stop the installation. Although you cannot install as a user, you may disable or enable them with the Add-ons manager.
Some add-ons require you to restart the browser when you uninstall or disable them. In such cases, after clicking either of these actions, a link appears in the add-ons manager.
Before you actually print a Web page, you can use the print preview function to check how the printed page will look. From the menu bar, choose › . Configure paper size and orientation per printer with .
To print a Web page, either choose › from the menu bar or press Ctrl–P. The dialog opens. To print with the default options, click .
The Printer dialog also offers extensive configuration options to fine-tune the printout. On the tab, choose a printer, the range to print, the number of copies and the order. lets you specify the number of pages per side, the scaling factor, and paper source and type. If the printer supports it, you can also activate double-sided printing here. Control how frames, backgrounds, header and footer are printed on the tab.
To get more information about Firefox, see the following links:
Mozilla forums: https://www.mozilla.org/about/forums/
Main Menu reference: http://support.mozilla.org/kb/Menu+reference
Preferences reference: http://support.mozilla.org/kb/Options+window
Keyboard shortcuts: http://support.mozilla.org/kb/Keyboard+shortcuts
Evolution makes storing, organizing, and retrieving your personal information easy, so you can work and communicate more effectively with others. It is a professional groupware program and an important part of the Internet-connected desktop.
Evolution can help you work in a group by handling e-mail, contact information, and one or more calendars. It can do that on one or several computers, connected directly or over a network, for one person or for large groups.
Evolution helps you accomplish common daily tasks quickly. For example, you can easily reuse appointment or contact information sent to you by e-mail, or send e-mails to a contact or appointment. If you receive lots of e-mail, you can use advanced features like search folders, which let you save searches as though they were ordinary e-mail folders.
This chapter introduces you to Evolution and helps you get started. For more details, refer to the Evolution application help.
To start Evolution, click › › .
The first time you start Evolution, it opens an assistant to help you set up e-mail accounts and import data from other applications.
The helps you provide all the required information.
When the assistant starts, the page is displayed. Proceed to the page. If you previously backed up your Evolution configuration and want to restore it, activate the restoration option and select the backup file in the file chooser dialog.
Otherwise, proceed to .
The page is the next step in the assistant.
Type your full name in the field.
Type your e-mail address in the field.
(Optional) Type an address in the field.
Only use this field if you want replies to e-mails from you to be sent to a different e-mail address.
(Optional) Type your organization name in the field.
This is the company where you work, or the organization you represent when you send e-mails.
Proceed to the next page.
The page lets you determine the server that you want to use to receive e-mail.
You need to specify the type of server you want to receive mail from. If you are not sure about the type of server, contact your e-mail provider.
Select a server type in the list. The following is a list of available server types:
Exchange Web Services: Allows you to connect to newer Microsoft Exchange servers to synchronize e-mail, calendar, and contact information. This is only available if you have installed the connector for Microsoft* Exchange*, which is packaged in evolution-ews.
IMAP+: Keeps the e-mail on your server, so you can access your e-mail from multiple systems.
POP: Downloads your e-mail to your hard disk for permanent storage, freeing up space on the e-mail server.
USENET News: Connects to a news server and downloads a list of available news digests.
Local Delivery: If you want to move e-mail from the spool and store it in your home directory, you need to provide the path to the mail spool you want to use. If you want to leave mail in your system’s spool files, select instead.
MH Format Mail Directories:
To download your e-mail using mh or an
mh-style program, you need to provide the path to the
mail directory you want to use.
Maildir Format Mail Directories: If you download your e-mail using Qmail or another Maildir-style program, select this option. You need to provide the path to the mail directory you want to use.
Standard Unix Mbox Spool File or Directory: To read and store e-mail in the mail spool on your local system, select this option. You need to provide the path to the mail spool you want to use.
None: If you do not plan to check e-mail with this account, select this option. There are no configuration options.
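For the local storage types listed above, the on-disk layout differs. The following sketch (the helper name and heuristics are my own, based on the descriptions above) shows how the formats can be told apart: a Maildir keeps cur/new/tmp subdirectories, an MH directory keeps a summary file, and an mbox spool is a single flat file.

```shell
# Hypothetical helper: guess the on-disk format of a local mail store.
detect_mail_storage() {
  p="$1"
  if [ -d "$p/cur" ] && [ -d "$p/new" ] && [ -d "$p/tmp" ]; then
    echo "Maildir"        # Maildir-style: three subdirectories
  elif [ -f "$p/.folders" ] || [ -f "$p/.mh_sequences" ]; then
    echo "MH"             # MH-style: per-directory summary files
  elif [ -f "$p" ]; then
    echo "mbox"           # a flat spool file
  else
    echo "unknown"
  fi
}

# Example usage: detect_mail_storage "$HOME/Maildir"
```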
If you selected IMAP+, POP, or USENET News as the server type, you need to specify additional information.
If you are not sure about the correct server address, user name or security setting, contact your e-mail provider.
Type the host name of your e-mail server into the text box .
Type your user name for the account into the text box .
Choose a security setting supported by your mail server. For security reasons, avoid using .
Select your authentication type in the list. To have Evolution check for supported authentication types, click . Then choose one of the options without a strikeout.
Some servers do not announce the authentication mechanisms they support. Therefore, clicking this button is not a guarantee that the shown mechanisms actually work.
Proceed to the next page.
If you selected Exchange Web Services as the server type, you need to specify additional information.
If you are not sure about the correct server address, user name or security setting, contact your e-mail provider.
Type your user name for the account into the text box .
Type the EWS URL of your e-mail server into the text box .
If available, type the address of an Offline Address Book into the text box .
If your login name and the name of your mailbox differ, select . Then type the mailbox name into the text box below.
Select an authentication type in the list. To have Evolution check for supported authentication types, click . Then choose one of the options without a strikeout.
Some servers do not announce the authentication mechanisms they support. Therefore, clicking this button is not a guarantee that the shown mechanisms actually work.
Proceed to the next page.
If you selected , , , or , specify the path to the local files or directories in the path field.
After you have selected a mail delivery mechanism, you can set some preferences for its behavior.
If you selected IMAP+ as the receiving server type, you will now see a page of options to specify the behavior of Evolution.
You can choose from the following options:
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to check for new messages in all folders.
Select if you want to check for new messages in subscribed folders.
Select to use Quick Resync, which makes browsing mail faster on supported servers.
Select if you want Evolution to listen for change notifications. If you activate this option, Evolution will show you mail as it arrives. Therefore, you can usually deactivate .
Select if you want Evolution to show only subscribed folders.
You can unsubscribe from folders to cut down on the number of irrelevant folders shown in Evolution and to reduce the amount of mail that is downloaded.
Select if you want to apply filters to new messages, and whether to do so in all folders or only in the Inbox folder.
Select if you want to check new messages for junk content, and whether to do so in all folders or only in the Inbox folder.
Select this to download all your mail, so you can read it offline.
Proceed to the next page.
If you selected POP as the receiving server type, you will now see a page of options to specify the behavior of Evolution.
You can choose from the following options:
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to leave your mail on the server or delete it from the server when you download it to your computer. You can also set a period of time for which messages will be kept on the server after they have been downloaded.
Disabling POP3 extensions can help with old or misconfigured servers. Select if you have trouble receiving mail.
Proceed to the next page.
If you selected USENET News as the receiving server type, you will now see a page of options to specify the behavior of Evolution.
You can choose from the following options:
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to apply filters to new messages.
Abbreviate folder names, for example,
comp.os.linux appears as
c.o.linux.
Display only the name of the folder. For example, the folder
evolution.mail would appear as
evolution.
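The folder name abbreviation described above can be sketched as a small shell function: keep only the first letter of every name component except the last (the function name is hypothetical):

```shell
# Abbreviate a newsgroup folder name, e.g. comp.os.linux -> c.o.linux.
abbreviate_group() {
  echo "$1" | awk -F. '{ for (i = 1; i < NF; i++) printf "%s.", substr($i, 1, 1); print $NF }'
}

abbreviate_group comp.os.linux   # prints c.o.linux
```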
Proceed to the next page.
If you selected Exchange Web Services as the receiving server type, you will now see a page of options to specify the behavior of Evolution.
You can choose from the following options:
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to check for new messages in all folders.
Select if you want Evolution to listen for change notifications. If you activate this option, Evolution will show you mail as it arrives. Therefore, you can usually deactivate .
Select if you want to apply filters to new messages.
Select if you want to check new messages for junk content, and whether to do so in all folders or only in the Inbox folder.
Select this to download all your mail, so you can read it offline.
Set the maximum time to wait for an answer from the server.
If you provided an OAB URL in the previous step, you can choose to cache the address book. This makes the address book available when offline.
Proceed to the next page.
If you selected that you want to receive mail through Local Delivery, you will now see a page of options to specify the behavior of Evolution.
Select if you want Evolution to automatically check for new mail. Set how often to check.
Proceed to the next page.
If you selected that you want to receive mail through MH-Format Mail Directories, you will now see a page of options to specify the behavior of Evolution.
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select to use the
.folders summary file.
Proceed to the next page.
If you selected that you want to receive mail through Maildir-Format Mail Directories, you will now see a page of options to specify the behavior of Evolution.
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to apply filters to new messages.
Proceed to the next page.
If you selected that you want to receive mail through a Standard Unix Mbox Spool File or Directory, you will now see a page of options to specify the behavior of Evolution.
Select if you want Evolution to automatically check for new mail. Set how often to check.
Select if you want to apply filters to new messages.
Select to store status headers in a way compatible with Elm, Pine, and Mutt.
Proceed to the next page.
Now that you have entered information about how you plan to receive mail, Evolution needs to know about how you want to send it. Usually, a separate server configuration is necessary for this. Otherwise, this page will be skipped.
Select a server type from the list.
The following server types are available:
Sendmail: Uses the Sendmail program to send mail from your system. Sendmail is more flexible, but is not as easy to configure, so you should select this option only if you know how to set up a Sendmail service.
SMTP: Sends mail using a separate mail server. This is the most common choice for sending mail. If you choose SMTP, there are additional configuration options.
Type the host address in the field.
If you are not sure what your host address is, contact your e-mail provider.
Select if your server requires authentication.
If you selected that your server requires authentication, you need to provide the following information:
Choose a security setting supported by your mail server. For security reasons, avoid using .
Select your authentication type in the list.
or
Click to have Evolution check for supported types. Then choose one of the options without a strikeout.
Some servers do not announce the authentication mechanisms they support. Therefore, clicking this button is not a guarantee that the shown mechanisms actually work.
Type your user name in the field.
Proceed to the next page.
Now that you have finished the e-mail configuration process, you need to give the account a name. The name can be any name you prefer. Type your account name in the field. Proceed to the next page and confirm your changes.
Depending on your configuration, you may now be asked for your e-mail passwords and whether you want to save them or enter them each time you start Evolution.
The Evolution main window will then open for the first time.
Now that the first-run configuration has finished, you are ready to begin using Evolution. This section sums up the most important parts of the user interface.
The menu bar gives you access to nearly all of the features of Evolution.
The folder list gives you a list of the available folders for each account. To see the contents of a folder, click the folder name. The contents are displayed in the e-mail list.
The toolbar gives you fast and easy access to the frequently used features in each component.
The search bar lets you search for e-mails. You can filter e-mails, contacts, and calendar entries and tasks using different criteria: a label, a search term, and an account or folder. The Search bar can also save frequently used searches to a search folder.
The message list displays a list of e-mails that you have received. To view an e-mail in the preview pane, select the e-mail.
The shortcut bar at the left lets you switch between folders and program components.
The statusbar periodically displays a message, or informs you about the progress of a task, such as sending e-mail.
On the far left, you will find the Online/Offline indicator. Click it to switch between using Evolution in online and offline mode.
The preview pane displays the contents of the e-mails that are selected in the e-mail list.
The shortcut bar is the column on the left side of the main window. At the top, there is a list of folders for the selected Evolution component. The buttons at the bottom are shortcuts to the individual components, such as Mail and Contacts.
The folder list organizes your e-mail, calendars, contact lists, and task lists in a tree. Most people find one to four folders at the base of the tree, depending on the component and their system configuration. Each Evolution component has at least one, called , for local information. For example, the folder list for the e-mail component shows all your e-mail accounts, local folders, and search folders.
If you receive large amounts of e-mail, you need additional ways to organize it. In Evolution, you can create your own e-mail folders, address books, calendars, task lists, or memo lists.
To create a new folder:
Click › › .
Type the name of the folder in the field.
Select the location of the new folder.
Click .
Right-click a folder or subfolder to display a menu with the following options:
: Marks all the messages in the folder as read.
: Creates a new folder or subfolder in the same location.
: Copies the folder to a different location. When you select this item, Evolution offers a choice of locations to copy the folder to.
: Moves the folder to another location.
: Deletes the folder and all contents.
: Lets you change the name of the folder.
Refresh: Refreshes the folder.
Properties: Shows the number of total and unread messages in a folder.
You can also rearrange folders and messages by dragging and dropping them.
Any time new e-mail arrives in an e-mail folder, that folder label is displayed in bold text, along with the number of new messages in that folder.
The e-mail component of Evolution has the following standout features:
It supports multiple e-mail sources from many protocols.
It lets you guard your privacy with encryption.
It can speedily handle large amounts of e-mail.
Search folders allow you to come back to often-used searches.
Below is a summary of the user interface elements of the e-mail window.
The message list displays all the e-mails that you have. This includes all your read and unread messages and e-mail that is flagged to be deleted. With the drop-down box above the message list, you can filter the view using predefined and custom labels.
This is where your e-mail is displayed.
If you find the preview pane too small, you can resize the pane, enlarge the whole window, or double-click the message in the message list to have it open in a new window. To change the size of a pane, drag the divider between the two panes.
As with folders, you can right-click messages in the message list and get a menu of possible actions. This includes moving or deleting them, creating filters or search folders based on them, and marking them as junk mail.
Actions related to e-mail, like and , appear as buttons in the toolbar and are also located in the right-click menu.
Evolution allows you to create and edit message templates that you can use at any time to send mail with the same pattern.
To begin using the calendar, click in the shortcut bar. By default, the calendar shows today’s schedule on a ruled background. At the upper right, there is a list, where you can keep a list of tasks separate from your calendar appointments. Below that, there is a list for memos.
The appointment list displays all your scheduled appointments.
The month pane is a small view of a calendar month. You can also select a range of days in the month pane to display a custom range of days in the appointment list.
Tasks are distinct from appointments because they generally do not have times associated with them. You can see a larger view of your task list by clicking in the shortcut bar.
Memos, like Tasks, do not have times associated with them. You can see a larger view of your Memo list by clicking in the shortcut bar.
To use the contacts component, click in the shortcut bar. The Evolution contacts component can handle all of the functions of an address book or phone book.
However, it also does more than a paper book. To share your address book on a network, you can use LDAP directories. To create a new contact entry, right-click an e-mail address or double-click an empty space in the right pane. You can also search contacts using the search bar.
By default, the display shows all your contacts in alphabetical order, in a card-based view. You can select other views from the menu.
Get more information about Evolution from the application help available via F1.
Find more information on the project home page https://wiki.gnome.org/Apps/Evolution.
Empathy is an instant messaging (IM) client that allows you to connect to multiple accounts simultaneously. Chat live with your contacts in one tabbed interface, regardless of which IM system they use. Empathy uses Telepathy for protocol support.
Empathy supports the following instant messaging protocols: Google Talk (Jabber/XMPP), MSN, IRC, Salut, AIM, Facebook, Yahoo!, Gadu Gadu, Groupwise®, ICQ and QQ. (The supported protocols depend on installed Telepathy Connection Manager components.)
In the following, learn how to set up Empathy and how to communicate with your contacts.
To use Empathy, you must already have an account for the messaging service you want to use. For example, to use Empathy to chat via AIM, you must first have an AIM account.
To start Empathy, select › › .
When you start Empathy for the first time, a message appears, prompting you to configure an account.
Enter your account data. The dialog shows the accounts that have been configured so far.
To add another account:
In the dialog, click the plus icon.
Choose the type of account you want to configure, enter your user ID and password for the account and click . The dialog to add or modify accounts differs for each type of account, depending on what setup options are available for that account.
To enter or modify connection data for an account:
Select the account and click › .
Enter a server name and a port to use for the connection. Specify additional parameters, such as encryption options, if necessary. If you are unsure which parameters to use, refer to your messaging service.
Click to confirm your changes.
To go online with your account, turn the account switch on. When prompted for your password, enter it.
To disable the account, turn the switch off. If you are finished with the configuration of your accounts, close the dialog.
Use the to manage your contacts. You can add and remove contacts and organize them in groups, so they are easy to find.
To add a contact, click › .
Select the for which you want to add a contact.
As , enter the name or user ID of the person you want to add.
By default, will show the same entry, but you can enter a different name or nickname for the contact person here.
As soon as you start typing into the text box, the dialog will also show any groups that you have already defined.
To add the new contact to a group, activate the respective group's check box.
To create a new group, type a group name into the text box next to and click .
Click to confirm your changes and to close the dialog.
In case the groups or the newly added contacts are not displayed in the , check the Empathy preferences by clicking › . Activate and to make all contacts and groups appear in the .
To remove a contact from the list, right-click the name of that contact, select and confirm your choice.
To chat with other participants, you need to be connected to the Internet. After a successful login, you are usually marked as in the , and thus visible to others. To change your status, click the drop-down box at the top of the and select another option.
To open a chat session, double-click a contact name in the . The chat screen opens. Type your message, then press Enter to send.
If you open more than one chat session, the new session appears as a tab in the existing chat window. To see all messages of a session and to be able to write a reply, click the tab of that session. To see multiple sessions side by side, use the mouse to drag a tab out of the window. A second window will open.
To close a chat session, close the tab or window for it.
This chapter explained the Empathy options you need to know about to set up Empathy and communicate with your contacts. It does not explain all features and options available. For more information, open Empathy, then click .
For updates about new features and for the latest information, refer to the home page of the project at https://wiki.gnome.org/Apps/Empathy.
Before proceeding, make sure that the package ekiga is installed.
Before starting, make sure that the following requirements are met:
Your sound card is properly configured.
A headset or a microphone and speakers are connected to your computer.
For dialing in to regular phone networks, a SIP account is required. SIP (Session Initiation Protocol) is the protocol used to establish sessions for audio and video conferencing or call forwarding.
There are many VoIP providers all over the world. One provider is the Ekiga project itself; go to https://ekiga.im to learn more.
For video conferencing: A Web cam is connected to your computer.
Start Ekiga by clicking › › .
On first start, Ekiga opens a configuration assistant that requests all data needed to configure Ekiga. Proceed as follows:
Click .
Enter your full name (name and surname). Click .
Enter your ekiga.net account data or choose not to register with http://www.ekiga.net. Click .
Enter your Ekiga Call Out Account data or choose not to register with http://www.ekiga.net. Click .
Set your connection type and speed. Click .
Configure the audio devices to use by choosing the audio ringing, output and input device driver. In general, you can keep the setting. Click .
Choose a video input device, if available. Click .
Check the summary of your settings and apply them.
If registration fails after making changes to your configuration, restart Ekiga.
Ekiga allows you to maintain multiple accounts. To configure an additional account, proceed as follows:
Open › .
Choose › . If you are unsure, select .
Enter the to which you have registered. This is usually an IP address or a host name that will be given to you by your Internet Telephony Service Provider. Enter , and according to the data provided by your provider.
Make sure is activated and leave the configuration dialog with . The account is displayed in the Ekiga main window, including its , which should change to Registered.
The user interface has different modes. To switch between views, use the toolbar. The first mode is , the second is and the last one is . Click the camera icon to open the . It displays images from your local Web cam (or from a remote Web cam during a call).
By default, Ekiga opens in the mode. This view shows you a local address book which lets you quickly open connections to often-used numbers.
Many of the functions of Ekiga are available with key combinations. Table 17.1, “Key Combinations for Ekiga” summarizes the most important ones.
Table 17.1: Key Combinations for Ekiga

| Key Combination | Description |
|---|---|
| Ctrl–O | Initiate a call with the current number. |
| Esc | Hang up. |
| Ctrl–N | Add a contact to your address book. |
| Ctrl–B | Open the dialog. |
| H | Hold the current call. |
| T | Transfer the current call to another party. |
| M | Suspend the audio stream of the current call. |
| P | Suspend the video stream of the current call. |
| Ctrl–W | Close the Ekiga user interface. |
| Ctrl–Q | Quit Ekiga. |
| Ctrl–E | Start the account manager. |
| Ctrl–J | Activate on the main user interface. |
| Ctrl–+ | Zoom in to the picture from the Web cam. |
| Ctrl–- | Zoom out on the picture from the Web cam. |
| Ctrl–0 | Return to the normal size of the Web cam display. |
| F11 | Use full screen for the Web cam. |
After Ekiga is properly configured, making a call is easy.
Switch to the mode.
Enter the SIP address of the party to call at the bottom of the window. The address should look like:
for direct local calls: sip:username@domainname or username@hostname
for calls via a SIP provider: sip:username@domainname or userid@sipserver
Click or press Ctrl–O and wait for the other party to pick up the phone.
To end the call, click or press Esc.
If you need to tweak the sound parameters, click › .
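The SIP addresses used above follow a simple URI scheme. As an illustration only (this is not part of Ekiga), a minimal Python sketch that splits such an address into its user and host parts; it does not attempt to be a full RFC 3261 parser:

```python
import re

# Optional "sip:" scheme, then user@host. Illustrative sketch only,
# not a complete SIP URI grammar.
SIP_RE = re.compile(r"^(?:sip:)?(?P<user>[^@]+)@(?P<host>[^@]+)$")

def parse_sip(address):
    """Return (user, host) for a SIP address, or None if it does not match."""
    m = SIP_RE.match(address)
    return (m.group("user"), m.group("host")) if m else None

print(parse_sip("sip:alice@ekiga.net"))   # ('alice', 'ekiga.net')
print(parse_sip("bob@sip.example.com"))   # ('bob', 'sip.example.com')
```

The same pattern matches both the direct form (user@hostname) and the provider form (sip:username@domainname).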
Ekiga can receive calls in two different ways. First, it can be called directly with sip:user@host; second, it can be called via a SIP provider. Most SIP providers enable you to receive calls from a normal land line to your VoIP account. Depending on the mode in which you use Ekiga, there are several ways in which you are alerted to an incoming call:
Incoming calls can only be received and answered if Ekiga is already started. You can hear the ring sound on your headset or your speakers. If Ekiga is not started, the call cannot be received.
Normally, the Ekiga panel applet runs silently without giving any notice of its existence. This changes when a call comes in. The main window of Ekiga opens and you hear a ringing sound on your headset or speakers.
Once you have noticed an incoming call, click to answer the call then start talking. If you do not want to accept this call, click . It is also possible to transfer the call to another SIP address.
Ekiga can manage your SIP contacts. All of the contacts are displayed in the tab, shown in the main window after start-up. To add a contact or a new contact group, select › .
If you want to add a new group, enter the group name into the bottom text box and click . The new group is then added to the group list and preselected.
The following entries are required for a valid contact:
Enter the name of your contact. This may be a full name, but you can also use a nickname here.
Enter a valid SIP address for your contact.
If you have many contacts, add your own groups.
To call a contact from the address book, double-click the contact. The call is initiated immediately.
The official home page of Ekiga is http://www.ekiga.org/. This site offers answers to frequently asked questions and more detailed documentation.
For information about the support of the H.323 teleconferencing protocol in Linux, see http://www.voip-info.org/wiki/view/H.323. This is also a good starting point when searching for projects supporting VoIP.
To set up a private telephone network, you might be interested in the PBX software Asterisk, http://www.asterisk.org/. Find information about it at http://www.voip-info.org/wiki-Asterisk.
GNOME Videos is the default movie player. GNOME Videos provides the following multimedia features:
Brasero is a GNOME program for writing data and audio CDs and DVDs. Start the program from the main menu by clicking › › .
The following sections are a quick introduction on how to create your own CD or DVD.
GIMP (the GNU Image Manipulation Program) is a program for creating and editing raster graphics. In most aspects, its features are comparable to those of Adobe* Photoshop* and other commercial programs. Use it to resize and retouch photographs, design graphics for Web pages, create covers for your custom CDs, or almost any other graphics project. It meets the needs of both amateurs and professionals.
GIMP is an extremely complex program. Only a small range of features, tools, and menu items are discussed in this chapter. See Section 18.8, “For More Information” for ideas of where to find more information about the program.
There are two main types of digital graphics: raster and vector. GIMP is intended for working with raster graphics, which are most often used for digital photographs or scanned images.
Raster Images. A raster image is a collection of pixels: small blocks of color that create an entire image when put together. High resolution images contain a large number of pixels. Because of this, such image files can easily become quite large. It is not possible to increase the size of a raster image without losing quality.
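To see why raster files grow quickly, the raw data size is simply pixel count times bytes per pixel. A small Python sketch (illustrative only, not part of GIMP; the 6000×4000 camera resolution is an assumed example):

```python
def uncompressed_size_bytes(width, height, bytes_per_pixel=3):
    """Size of raw pixel data: one RGB pixel takes 3 bytes (24 bits)."""
    return width * height * bytes_per_pixel

# A 24-megapixel photo (6000x4000 pixels, an assumed typical camera size):
size = uncompressed_size_bytes(6000, 4000)
print(size)                      # 72000000 bytes of raw pixel data
print(round(size / 1024**2, 1))  # 68.7 MiB before any compression
```

Formats such as JPEG and PNG compress this data, which is why files on disk are usually much smaller.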
GIMP supports most common formats of raster graphics, like JPEG, PNG, GIF, BMP, TIFF, PSD, and more.
Vector Images. Unlike raster images, vector images do not store information about individual pixels. Instead, they use geometric primitives such as points, lines, curves, and polygons. Vector images can be scaled very easily. Depending on their content, vector image files can be either very small or very large. However, their file size is usually independent of their display size.
The disadvantage of vector images is that they are not good at representing complex images with many colors such as photographs. There are many specialized applications for vector graphics, for example Inkscape. GIMP has very limited support for vector graphics. For example, GIMP can open and rasterize vector graphics in SVG format or work with vector paths.
GIMP supports only the most common color spaces:
RGB images with 8 bits per channel. This equals 24 bits per pixel in RGB images without an alpha channel (transparency). With an alpha channel, that equals 32 bits per pixel.
Grayscale images with 8 bits per pixel.
Indexed images with up to 256 colors.
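The bit depths mentioned above follow directly from the number of channels. A short illustrative calculation (not GIMP code):

```python
def bits_per_pixel(channels, bits_per_channel=8):
    """Color depth of one pixel, e.g. RGB = 3 channels, RGBA = 4."""
    return channels * bits_per_channel

print(bits_per_pixel(3))  # 24 (RGB without an alpha channel)
print(bits_per_pixel(4))  # 32 (RGB plus alpha channel)
print(bits_per_pixel(1))  # 8  (grayscale)
```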
Many high-end digital cameras produce image files with color depths above 8 bits per channel. If you import such an image into GIMP, you will lose some color information. GIMP also does not support a CMYK color mode for professional printing.
To start GIMP, select › › .
By default, GIMP shows three windows. The toolbox, an empty image window with the menu bar, and a window containing several docked dialogs. The windows can be arranged on the screen as you need them. If they are no longer needed, they can also be closed. Closing the image window when it is empty quits the application.
In the default configuration, GIMP saves your window layout when you quit. Dialogs left open reappear when you next start the program.
If you want to combine all windows of GIMP, activate › .
If there is currently no image open, the image window is empty, containing only the menu bar and the drop area, which can be used to open any file by dragging and dropping it there. Every new, opened, or scanned image appears in its own window. If there is more than one open image, each image has its own image window. There is always at least one image window open.
In Single-Window Mode, all image windows are accessible from a tab bar at the top of the window.
The menu bar at the top of the window provides access to all image functions. You can also access the menu by right-clicking the image or clicking the small arrow button in the top left corner of the rulers.
The menu offers the standard file operations, such as , , , and . quits the application.
With the items in the menu, control the display of the image and the image window. opens a second display window of the current image. Changes made in one view are reflected in all other views of that image. Alternate views are useful for magnifying a part of an image for manipulation while seeing the complete image in another view. Adjust the magnification level of the current window with . When is selected, the image window is resized to fit the current image display exactly.
The toolbox contains drawing tools, a color selector, and a freely configurable space for options pages. If you accidentally close the toolbox, you can reopen it by clicking › .
To find out what a particular tool does, hover over its icon. At the very top, there is a drop area which can be used to open any image file by simply dragging and dropping it there.
The current foreground and background color are shown in two overlapping boxes. The default colors are black for the foreground and white for the background. Swap the foreground and background color with the bent arrow icon to the upper right of the boxes. Use the black and white icon to the lower left to reset the colors to the default. Click the box to open a color selection dialog.
Under the toolbox, a dialog shows options for the currently selected tool. If it is not visible, open it by double-clicking the icon of the tool in the toolbox.
shows the different layers in the current image and can be used to manipulate the layers. Information is available in Section 18.6.6, “Layers”.
shows the color channels of the current image and can manipulate them.
Paths are a vector-based method of selecting parts of an image. They can also be used for drawing. shows the paths available for an image and provides access to path functions. shows a limited history of modifications made to the current image. Its use is described in Section 18.6.5, “Undoing Mistakes”.
Although GIMP can be a bit overwhelming for new users, most quickly find it easy to use after they work out a few basics. Crucial basic functions are creating, opening, and saving images.
To create a new image, select › . This opens a dialog in which you can make settings for the new image.
If desired, select a predefined setting called a .
To create a custom template, select › › and use the controls offered by the window that opens.
In the section, set the size of the image to create in pixels or another unit. Click the name of the unit to select another unit from the list of available units.
(Optional) To set a different resolution, click , then change the value for .
The default resolution of GIMP is usually 72 pixels per inch. This corresponds to a common screen display and is sufficient for most Web page graphics. For print images, use a higher resolution, such as 300 pixels per inch.
In , select whether the image should be in color () or . For detailed information about image types, see Section 18.6.7, “Image Modes”.
In select the color the image is filled with. You can choose between and set in the toolbox, or for a transparent image. Transparency is represented by a gray checkerboard pattern.
When the settings meet your needs, click .
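The relationship between pixel dimensions, resolution, and physical size mentioned in the resolution step can be sketched in a few lines of Python (illustrative only; the 1800-pixel width is an assumed example):

```python
def print_size_inches(pixels, ppi):
    """Physical size of one image dimension at a given resolution."""
    return pixels / ppi

# The same 1800-pixel-wide image at screen vs. print resolution:
print(print_size_inches(1800, 72))   # 25.0 inches wide at 72 ppi
print(print_size_inches(1800, 300))  # 6.0 inches wide at 300 ppi
```

This is why an image that fills a screen at 72 ppi prints much smaller, and why print work needs more pixels to begin with.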
To open an existing image, select › .
In the dialog that opens, select the desired file and click .
GIMP makes a distinction between saving and exporting images.
Saving an Image. The image is stored with all its properties in a lossless format. This includes, for example, layer and path information. This means that repeatedly opening and saving the image will neither degrade its quality nor how well it can be edited.
To save an image, use › or › . To be able to store all properties, only the native format of GIMP is allowed in this mode: the XCF format.
Exporting an Image. The image is stored in a format in which some properties can be lost. For example, most image formats do not support layers. When exporting, GIMP will often tell you which properties will be lost and ask you to decide how to proceed.
To export an image, use › or › . Below is a selection of the most common file formats that GIMP can export to:
A common format for photographs and Web page graphics without transparency. Its compression method enables reduction of file sizes, but information is lost when compressing. It may be a good idea to use the preview option when adjusting the compression level. Levels of 85% to 75% often result in an acceptable image quality with reasonable compression. Repeatedly opening and then saving a JPEG can quickly result in poor image quality.
Although very popular in the past for graphics with transparency, GIF is less often used now. GIF is also used for animated images. The format can only save indexed images. See Section 18.6.7, “Image Modes” for information about indexed images. The file size can often be quite small if only a few colors are used.
With its support for transparency, lossless compression, and good browser support, PNG is the preferred format for Web graphics with transparency. An added advantage is that PNG offers partial transparency, which is not offered by GIF. This enables smoother transitions from colored areas to transparent areas (antialiasing). It also supports the full RGB color space which makes it usable for photos. However, it cannot be used for animations.
GIMP provides several tools for making changes to images. The functions described here are those most interesting for smaller edits.
After an image is scanned or a digital photograph is loaded from the camera, it is often necessary to modify the size for display on a Web page or for printing. Images can easily be made smaller either by scaling them down or by cutting off parts of them.
Enlarging an image is much more problematic. Because of the nature of raster graphics, quality is lost when an image is enlarged. It is recommended to keep a copy of your original image before scaling or cropping.
Select the crop tool from the toolbox (the paper knife icon) or click › › .
Click a starting corner and drag to outline the area to keep. A rectangle showing the crop area will appear.
To adjust the size of the rectangle, move your mouse pointer above any of the rectangle's sides or corners, then click and drag to resize as desired. If you want to adjust both width and height of the rectangle, use a corner. To adjust only one dimension, use a side. To move the whole rectangle to a different position without resizing, click anywhere near its center and drag to the desired position.
When you are satisfied with the crop area, click anywhere inside to crop the image or press Enter. To cancel the cropping, click anywhere outside the crop area.
Select › to change the overall size of an image.
Select the new size by entering it in or .
To change the proportions of the image when scaling (this distorts the image), click the chain icon to the right of the fields to break the link between them. When those fields are linked, all values are changed proportionately. Adjust the resolution with and .
The option controls the quality of the resulting image. The default interpolation method usually is a good standard to use.
When you are finished, click .
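What the chain icon does when linked can be expressed as one line of arithmetic: the unspecified dimension is scaled by the same factor as the one you entered. A hedged Python sketch of that proportional calculation (not GIMP code; the 1600×1200 size is an assumed example):

```python
def scale_proportionally(width, height, new_width):
    """Compute the height that keeps the original aspect ratio."""
    return round(new_width * height / width)

# Shrinking a 1600x1200 image to 800 pixels wide keeps a 4:3 ratio:
print(scale_proportionally(1600, 1200, 800))  # 600
```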
The canvas is the entire visible area of an image. Canvas and image are independent from each other. If the canvas is smaller than the image, you can only see part of the image. If the canvas is larger, you see the original image with extra space around it.
Select › .
In the dialog that opens, enter the new size. To make sure the dimensions of the image stay the same, click the chain icon.
After adjusting the size, determine how the existing image should be positioned in comparison to the new size. Use the values or drag the box inside the frame at the bottom.
When you are finished, click .
It is often useful to perform an image operation on only part of an image. To do this, the part of the image with which you want to work must be selected. Areas can be selected using the selection tools available in the toolbox, using the quick mask, or combining different options. Selections can also be modified with the items under . The selection is outlined with a dashed line, called marching ants.
The main selection tools are easy to use. The more complicated paths tool is not described here.
To determine whether a new selection should replace, be added to, be subtracted from, or intersect with an existing selection, use the row in the tool options.
This tool can be used to select rectangular or square areas. To select an area with a fixed aspect ratio, width, height or size, activate the option and choose the relevant mode in the dialog. To create a square, hold Shift while selecting a region.
Use this to select elliptical or circular areas. The same options are available as with the rectangular selection. To create a circle, hold Shift while selecting a region.
With this tool, you can create a selection based on a combination of freehand drawing and polygonal segments. To draw a freehand line, drag the mouse over the image with the left mouse button pressed. To create a polygonal segment, release the mouse button where the segment should start and press it again where the segment should end. To complete the selection, hover the pointer above the starting point and click inside the circle.
This tool selects a continuous region based on color similarities. Set the maximum difference between colors in the tool options dialog in . By default, the selection is based only on the active layer. To base the selection on all visible layers, check .
With this tool, select all the pixels in the image with the same or a similar color as the clicked pixel. The maximum difference between colors can be set in the tool options dialog in . The important difference between this tool and Fuzzy Select is that works on continuous color areas while selects all pixels with similar colors in the whole image regardless of their position.
Click a series of points in the image. As you click, the points are connected based on color differences. Click the first point to close the area. Convert it to a regular selection by clicking inside it.
The tool lets you semi-automatically select an object in a photograph with minimal manual effort.
To use the tool, follow these steps:
Activate the tool by clicking its icon in the or choosing › › from the menu.
Roughly select the foreground object you want to extract. Select as little as possible from the background but include the whole object. At this point, the tool works like the tool.
When you release the mouse button, the deselected part of the image is covered with a dark blue mask.
Draw a continuous line through the foreground object going over colors which will be kept for the extraction. Do not paint over background pixels.
When you release the mouse button, the entire background is covered with a dark blue mask. If parts of the object are also masked, paint over them. The mask will adapt.
When you are satisfied with the mask, press Enter. The mask will be converted to a new selection.
The quick mask is a way of selecting parts of an image using the paint tools. A good way to use it is to first create a rough selection using the or tool. Then start using the :
To activate the , in the lower left corner of the image window, click the icon with the dashed box. The icon now changes to a red box.
The highlights the deselected parts of the image with a red overlay. Areas appearing in their normal color are selected.
To use a different color for displaying the quick mask, right-click the quick mask button then select from the menu. Click the colored box in the dialog that opens to select a new color.
To modify the selection, use the paint tools.
Painting with white selects the painted pixels. Painting with black deselects pixels. Shades of gray (colors are treated as shades of gray) create a partial selection. Partial selections allow a smooth transition between selected and deselected areas.
When you are finished, return to the normal selection view by clicking the icon in the lower left corner of the image window. The selection is then displayed with the marching ants.
Most image editing involves applying or removing color. By selecting a part of the image, you can limit where color can be applied or removed. When you select a tool and move the mouse pointer onto an image, the appearance of the mouse pointer changes to reflect the chosen tool.
With many tools, an icon of the current tool is shown along with the arrow. For paint tools, an outline of the current brush is shown, allowing you to see exactly where you will be painting in the image and how large of an area will be painted.
The GIMP toolbox always shows two color swatches. The foreground color is used by the paint tools. The background color is used much more rarely, but it can easily be switched to become the foreground color.
To change the color displayed in a swatch, click the swatch. A dialog with five tabs opens.
These tabs provide different color selection methods. Only the first tab, shown in Figure 18.2, “The Basic Color Selector Dialog”, is described here. The new color is shown in . The previous color is shown in .
The easiest way to select a color is by using the colored areas in the boxes to the left. In the narrow vertical bar, click a color similar to the desired color. The larger box to the left then shows available nuances. Click the desired color. It is then shown in .
The arrow button to the right of allows saving colors. Click the arrow to copy the current color to the history. A color can then be selected by clicking it in the history.
A color can also be selected by directly entering its hexadecimal color code in .
The color selector defaults to selecting a color by hue. To select by saturation, value, red, green, or blue, select the corresponding radio button to the right. The sliders and number fields can also be used to modify the currently selected color. Experiment a bit to find out what works best for you.
When you are finished, click .
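The hexadecimal color codes mentioned above encode the red, green, and blue channels as two hex digits each. A small illustrative converter in Python (not part of GIMP):

```python
def hex_to_rgb(code):
    """Convert an HTML-style hex color code to an (R, G, B) tuple."""
    code = code.lstrip("#")
    # Two hex digits per channel: positions 0-1, 2-3, 4-5.
    return tuple(int(code[i:i + 2], 16) for i in range(0, 6, 2))

print(hex_to_rgb("#ff8000"))  # (255, 128, 0) - an orange tone
print(hex_to_rgb("000000"))   # (0, 0, 0) - black
```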
To select a color that already exists in your image, use the eye dropper tool. With the tool options, set whether the foreground or background color should be selected.
To paint and erase, use the tools from the toolbox. There are a number of options available to fine-tune each tool. Pressure sensitivity options apply only when a pressure-sensitive graphics tablet is used.
The pencil, brush, airbrush, and eraser work much like their real-life equivalents. The ink tool works like a calligraphy pen. Paint by clicking and dragging. The bucket fill is a method of coloring areas of an image. It fills based on color boundaries in the image. Adjusting the threshold modifies its sensitivity to color changes.
To add text, use the text tool. Use the tool options to select the desired font and text properties. Click into the image, then start writing.
The text tool creates text in a special layer. To work with the image after adding text, read Section 18.6.6, “Layers”. When the text layer is active, it is possible to modify the text by clicking in the image to reopen the entry dialog.
The clone tool is ideal for retouching images. It enables you to paint in an image using information from another part of the image. If desired, it can instead take information from a pattern.
When retouching, use a small brush with soft edges. In this way, the modifications can blend better with the original image.
To select the source point in the image, press and hold Ctrl while clicking the desired source point. Then paint with the tool. When you move the cursor while painting, the source point, marked by a cross, moves as well.
If the is set to (the default setting), the source resets to the original when you release the left mouse button.
Images often need a little adjusting to get ideal print or display results.
Select › . A dialog opens for controlling the levels in the image.
Good results can usually be obtained by clicking . To make manual adjustments to all channels, use the dropper tools in to pick areas in the image that should be black, neutral gray, and white.
To modify an individual channel, select the desired channel in . Then drag the black, white, and middle markers in the slider in . You can also use the dropper tools to select points in the image that should serve as the white, black, and gray points for that channel.
If is checked, the image window shows a preview of the image with the modifications applied.
When you are finished, click .
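Conceptually, moving the black and white markers performs a linear stretch of each channel: everything at or below the black point becomes 0, everything at or above the white point becomes 255, and values in between are spread out. A simplified sketch of that mapping (illustrative only; GIMP's actual levels tool also handles the gamma/middle marker, which is omitted here):

```python
def apply_levels(value, black_point, white_point):
    """Linearly stretch a 0-255 channel value between new black/white points."""
    if value <= black_point:
        return 0
    if value >= white_point:
        return 255
    return round((value - black_point) * 255 / (white_point - black_point))

# Stretching a low-contrast image whose pixel values span only 50..200:
print(apply_levels(50, 50, 200))   # 0   - darkest pixel becomes black
print(apply_levels(125, 50, 200))  # 128 - midtones stay in the middle
print(apply_levels(200, 50, 200))  # 255 - brightest pixel becomes white
```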
Most modifications made in GIMP can be undone. To view a history of modifications, use the undo dialog included in the default window layout or open one from the image window menu with › › .
The dialog shows a base image and a series of editing changes that can be undone. Use the buttons to undo and redo changes. In this way, you can often work back to the base image.
You can also undo and redo changes using and from the menu. Alternatively, use the shortcuts Ctrl–Z and Ctrl–Y.
Layers are a very important aspect of GIMP. By drawing parts of your image on separate layers, you can change, move, or delete those parts without damaging the rest of the image.
To understand how layers work, imagine an image created from a stack of transparent sheets. Different parts of the image are drawn on different sheets. The stack can be arranged and sorted. Individual layers or groups of layers can shift position, moving sections of the image to other locations. New sheets can be added and others can be removed or made invisible.
Use the dialog to view the available layers of an image. The text tool automatically creates special text layers when used. The active layer is selected. The buttons at the bottom of the dialog offer several functions. More are available in the menu opened when a layer is right-clicked in the dialog. The two icon spaces before the image name are used for toggling image visibility (eye icon when visible) and for linking layers. Linked layers are marked with the chain icon and moved as a group.
GIMP has three image modes:
RGB is a normal color mode and is the best mode for editing most images.
Grayscale is used for black-and-white images.
Indexed mode limits the colors in the image to a set number. The maximum number of colors in this mode is 255. It is mainly used for GIF images.
If you need an indexed image, it is normally best to edit the image in RGB, then convert to indexed right before exporting. If you export to a format that requires an indexed image, GIMP offers to index the image when exporting.
GIMP includes a wide range of filters and scripts for enhancing images, adding special effects to them or making artistic manipulations. They are available in . Experimenting is the best way to find out what is available.
To print an image, select › from the image menu. If your printer is configured in the system, it should appear in the list. You can configure printing options on and tabs.
When you are satisfied with the settings, click . aborts printing.
The following resources are very useful for users of GIMP. They contain much more information about GIMP than this chapter. If you want to use GIMP for more advanced tasks, you should not miss these resources.
http://www.gimp.org is the official home page of The GIMP. News about GIMP and related software is regularly posted on the front page.
provides access to the internal help system, including the extensive GIMP User Manual. The package gimp-help needs to be installed. This documentation is also available online in HTML and PDF formats at http://docs.gimp.org. Translations into many languages are available.
A collection of many interesting GIMP tutorials is maintained at http://www.gimp.org/tutorials/. It contains basic tutorials for beginners and tutorials for advanced or expert users.
Printed books about GIMP are published regularly. You will find a selection of the best ones with short annotations at http://www.gimp.org/books/.
GIMP functionality can be extended with scripts and plug-ins. Many such scripts and plug-ins are distributed in the GIMP package, but others can be downloaded from the Internet. At http://registry.gimp.org/, you will find a database of GIMP scripts and plug-ins.
You can also use mailing lists or IRC channels to ask questions about GIMP. Always try to find answers in the documentation mentioned above or in mailing list archives before asking your question. The time of experienced users present on GIMP lists and channels is limited. Be polite and patient. It may take some time before your question is answered.
There are several mailing lists about GIMP. You will find them at http://www.gimp.org/mail_lists.html. The GIMP User list is the most appropriate place to ask user questions.
There is a whole IRC network, GIMPNet, dedicated to GIMP and the GNOME desktop environment. You can connect to GIMPNet with your favorite IRC client by pointing it at the irc.gimp.org server. The #gimp-users channel is the right place to ask questions about using GIMP. If you want to listen to developers' discussions, join the #gimp channel.
GNOME Videos is the default movie player. GNOME Videos provides the following multimedia features:
Support for a variety of video and audio files
A variety of zoom levels and aspect ratios, and a full screen view
Seek and volume controls
Playlists
Complete keyboard navigation
To start GNOME Videos, click › › .
When you start GNOME Videos, the following window is displayed.
Click › .
Select the files you want to open, then click
You can also drag a file from another application (such as a file manager) to the GNOME Videos window. GNOME Videos opens the file and plays the movie or song. GNOME Videos displays the title of the movie or song beneath the display area and in the titlebar of the window.
If you try to open a file format that GNOME Videos does not recognize, the application displays an error message and recommends a suitable codec.
You can double-click a video or audio file in GNOME Files to open it in the GNOME Videos window by default.
Click › .
Specify the URI location of the file you want to open, then click .
To play a DVD, VCD, or CD, insert the disc in the optical device of your computer, then click › .
To eject a DVD, VCD, or CD, click › .
To pause a movie or song that is playing, click the button, or click › . When you pause a movie or song, the statusbar displays and the time elapsed on the current movie or song. To resume playing a movie or song, click the button, or click › .
To play or pause a movie, you can also press P.
To view properties of a movie or song, click › to make the sidebar appear. The dialog contains the title, artist, year, and duration of the movie or song, the video dimensions, codec, and frame rate, and the audio bit rate.
To seek through movies or songs, use any of the following methods:
Click › . Alternatively, use ←.
Click › . Alternatively, use →.
Click › , or click the
button.
Click › , or click the
button.
To change the zoom factor of the display area, use any of the following methods:
Click › . Alternatively, press F.
To exit fullscreen mode, click or press Esc.
Click › › .
Click › › .
Click › › .
To switch between different aspect ratios, click › .
The default aspect ratio is .
To hide the window controls of GNOME Videos, click › and deselect the option. To show the controls on the GNOME Videos window, right-click the window, then select . If the Show Controls option is selected, GNOME Videos shows the menubar, time elapsed slider, seek control buttons, volume slider, and statusbar on the window. If the Show Controls option is not selected, the application hides these controls and shows only the display area.
To show the playlist, click › . The Playlist sidebar is displayed.
You can use the Playlist dialog to do the following:
To add a track or movie: Click the button. Select the file you want to add to the playlist, then click .
To remove a track or movie: Select the file names from the file name list box, then click .
To save a playlist to file: Click the button, then specify a file name.
To move a track or movie up the playlist: Select the file name from the file name list box, then click the button.
To move a track or movie down the playlist: Select the file name from the file name list box, then click the button.
To hide the playlist, click › , or click the button.
To enable or disable repeat mode, click › . To enable or disable shuffle mode, click › .
To choose the language of the subtitles, click › › , then select the subtitles language (DVD) or subtitle file (AVI etc.) you want to display.
To disable the display of subtitles, click › › .
By default, GNOME Videos chooses the same language for the subtitles that you use on your computer.
GNOME Videos automatically loads and displays subtitles if the file that contains them has the same name as the video file. It supports the following subtitle file extensions: srt, asc, txt, sub, smi, or ssa.
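For example, the matching can be arranged from the command line. The file names below are hypothetical; the only requirement is that the subtitle file shares the video's base name:

```shell
# Hypothetical file names for illustration.
touch holiday.avi                  # the video file
touch downloaded-subtitles.srt     # subtitles fetched separately

# Give the subtitle file the same base name as the video so that
# GNOME Videos loads it automatically the next time the video is opened:
mv downloaded-subtitles.srt holiday.srt
```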
To modify GNOME Videos preferences, click › .
The General Preferences let you select a network connection speed, specify if media files should be played from the last used position, and change the font and encoding used to display subtitles.
General Preferences include the following:
Lets you specify whether to start playing the movie from the last position.
Select network connection speed from the Connection speed drop-down box.
Lets you specify whether to load the subtitles automatically, and change the font and encoding used to display the subtitles.
The Display Preferences let you choose to automatically resize the window when a new video is loaded, change the color balance, and configure visual effects when an audio file is played.
Display Preferences include the following:
Select this option if you want GNOME Videos to automatically resize the window when a new video is loaded.
Select this option if you want GNOME Videos to automatically disable the desktop screen saver while an audio file is playing.
You can choose to show visual effects when an audio file is playing, select the type of visualization you want to show, and the visualization size.
Specify the level of color brightness, contrast, saturation, and hue.
After starting Brasero for the first time, the main window appears as shown in Figure 20.1.
To create a data CD or DVD, proceed as follows:
Click or select › › . The project view appears.
Drag and drop the desired directories or individual files either from your file manager or by clicking the plus icon. To show your directory structure directly in Brasero, select › or press F7.
Optionally, save the project under a name of your choice with › .
Name your medium. The original label is .
Choose the output medium from the pull down menu next to the button (CD/DVD or an ISO image file).
Click . A new dialog appears, depending on what medium you have selected in the previous step:
CD/DVD. You can define some parameters, like the burning speed or where to store temporary files. Under you can also choose whether to burn the image directly, close the session, verify the written data, and more.
ISO Image. Specify a file name for your ISO image file.
Start the process with .
There are no significant differences between creating an audio CD and creating a data CD. Proceed as follows:
Select › › .
Drag and drop the individual audio tracks to the project directory. The audio data must be in WAV or Ogg Vorbis format. Determine the sequence of the tracks by moving them up or down in the project directory.
Click . A dialog opens.
Specify a drive to write to.
Click to adjust burning speed and other preferences. When burning audio CDs, choose a lower burning speed to reduce the risk of burn errors.
Click .
To copy a CD or DVD, proceed as follows:
Click or go to › › . The dialog opens.
Specify the source drive you want to copy.
Specify a drive or image file to write to.
If necessary, change the burning speed, the temporary directory and other options in .
Click .
If you already have an ISO image, click or go to › › . Choose the image to write and a disc to write to. If necessary, change parameters by clicking . Choose the location of the image file with the pop-up menu labeled . Start the burning process and click .
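Before burning, it is a good idea to verify that a downloaded ISO image is intact. The commands below are a self-contained sketch: the image file is a made-up stand-in, since in practice the distributor publishes the matching checksum file next to the real image:

```shell
# Create a stand-in "image" so the example is self-contained;
# in practice you would have downloaded the .iso and its checksum file.
echo "example image data" > image.iso

# Compute and record the checksum (normally the distributor provides this):
sha256sum image.iso > image.iso.sha256

# Verify the image against the recorded checksum; prints "image.iso: OK"
# on success and exits non-zero on a mismatch.
sha256sum -c image.iso.sha256
```

Only burn the image once the verification reports OK; a mismatch usually indicates an incomplete or corrupted download.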
Multisession discs can be used to write data in more than one burning session. This is useful, for example, for writing backups that are smaller than the media. In each session, you can add another backup file. Note that you are not limited to data CDs or DVDs: you can also add audio sessions to a multisession disc.
To start a new multisession disc, do the following:
Start with a data disc first as described in Section 20.1, “Creating a Data CD or DVD”. You cannot start with an audio CD session. Make sure that you do not fill up the entire disc, because otherwise you cannot append a new session.
Click . The window opens.
Select to make the disc multisession capable. Configure other options if needed.
Start the burning session with .
You can find more information about Brasero at https://wiki.gnome.org/Apps/Brasero.
The help center of the GNOME desktop (Help) provides central access to the most important documentation resources on your system, in searchable form. These resources include online help for installed applications, man pages, info pages, and the SUSE manuals delivered with your product. Learn more in Section A.1, “Using GNOME Help”.
When installing new software with YaST, the software documentation is installed automatically, and usually appears in the help center of your desktop. However, some applications, such as GIMP, may have different online help packages that can be installed separately with YaST and do not integrate into the help center.
/usr/share/doc
This traditional help directory holds various documentation files and the release notes for your system. Find more detailed information in Section 17.1, “Documentation Directory”.
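A quick way to see what is installed there is to list the directory. The packages subdirectory follows the openSUSE layout and may not exist on other distributions:

```shell
# Top-level documentation directory (present on most Linux systems):
ls /usr/share/doc

# On openSUSE, per-package documentation lives one level deeper;
# the subdirectory name is distribution-specific:
ls /usr/share/doc/packages 2>/dev/null || echo "no packages/ subdirectory here"
```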
When working with the shell, you do not need to know the options of the commands by heart. Traditionally, the shell provides integrated help by means of man pages and info pages. Read more in Section 17.2, “Man Pages” and Section 17.3, “Info Pages”.
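The built-in help sources can be tried directly in a terminal; `ls` serves here as an arbitrary example command, and the guards allow the example to run even on minimal systems where the man or info packages are not installed:

```shell
# Short usage summary built into most GNU commands:
ls --help | head -n 3

# Full manual page, if the man system is installed
# (press Q to leave the pager when running interactively):
if command -v man >/dev/null; then man ls | head -n 5; fi

# Info page, often more detailed than the man page:
if command -v info >/dev/null; then info ls | head -n 5; fi
```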
On the GNOME desktop, to start Help directly from an application, either click the button or press F1. Both options take you directly to the application's documentation in the help center. However, you can also start Help by opening a terminal and entering yelp, or from the main menu by clicking › › .
To see an overview of available application manuals, click the menu icon and select .
The menu and the toolbar provide options for navigating the help center, for searching and for printing contents from Help. The help topics are grouped into categories presented as links. Click one of the links to open a list of topics for that category. To search for an item, click the search icon and enter the search string into the search field at the top of the window.
In addition to the SUSE manuals installed under /usr/share/doc, you can also access the product-specific manuals and documentation on the Web. For an overview of all documentation available for openSUSE Leap, check out your product-specific documentation Web page at https://doc.opensuse.org/.
If you are searching for additional product-related information, you can also refer to the following Web sites:
You can also try general-purpose search engines. For example, use the search terms Linux CD-RW help or LibreOffice file conversion problem if you have trouble with CD burning or with LibreOffice file conversion.
Apart from the product-specific help resources, there is a broad range of information available for Linux topics.
The Linux Documentation Project (TLDP) is run by a team of volunteers who write Linux-related documentation (see http://www.tldp.org). The set of documents contains tutorials for beginners, but is mainly focused on experienced users and professional system administrators. TLDP publishes HOWTOs, FAQs, and guides (handbooks) under a free license. Parts of the documentation from TLDP are also available on openSUSE Leap.
FAQs (frequently asked questions) are a series of questions and answers. They originate from Usenet newsgroups where the purpose was to reduce continuous reposting of the same basic questions.
Manuals and guides for various topics or programs can be found at http://www.tldp.org/guides.html. They range from Bash Guide for Beginners to Linux File System Hierarchy to Linux Administrator's Security Guide . Generally, guides are more detailed and exhaustive than HOWTOs or FAQs. They are usually written by experts for experts.
Wikipedia is “a multilingual encyclopedia designed to be read and edited by anyone” (see http://en.wikipedia.org). The content of Wikipedia is created by its users and is published under a dual free license (GFDL and CC-BY-SA). However, as Wikipedia can be edited by any visitor, it should be used only as a starting point or general guide. There is much incorrect or incomplete information in it.
There are various sources that provide information about standards or specifications.
The Linux Foundation is an independent nonprofit organization that promotes the distribution of free and open source software. The organization endeavors to achieve this by defining distribution-independent standards. The maintenance of several standards, such as the important LSB (Linux Standard Base), is supervised by this organization.
The World Wide Web Consortium (W3C) is one of the best-known standards organizations. It was founded in October 1994 by Tim Berners-Lee and concentrates on standardizing Web technologies. W3C promotes the dissemination of open, license-free, and manufacturer-independent specifications, such as HTML, XHTML, and XML. These Web standards are developed in a four-stage process in working groups and are presented to the public as W3C recommendations (REC).
OASIS (Organization for the Advancement of Structured Information Standards) is an international consortium specializing in the development of standards for Web security, e-business, business transactions, logistics, and interoperability between various markets.
The Internet Engineering Task Force (IETF) is an internationally active cooperative of researchers, network designers, suppliers, and users. It concentrates on the development of Internet architecture and the smooth operation of the Internet by means of protocols.
Every IETF standard is published as an RFC (Request for Comments) and is available free-of-charge. There are six types of RFC: proposed standards, draft standards, Internet standards, experimental protocols, information documents, and historic standards. Only the first three (proposed, draft, and full) are IETF standards in the narrower sense (see http://www.ietf.org/rfc/rfc1796.txt).
The Institute of Electrical and Electronics Engineers (IEEE) is an organization that draws up standards in the areas of information technology, telecommunication, medicine and health care, transport, and others. IEEE standards are subject to a fee.
ISO (International Organization for Standardization) is the world's largest developer of standards and maintains a network of national standardization institutes in over 140 countries. ISO standards are subject to a fee.
The Deutsches Institut für Normung (DIN) is a registered technical and scientific association. It was founded in 1917. According to DIN, the organization is “the institution responsible for standards in Germany and represents German interests in worldwide and European standards organizations.”
The association brings together manufacturers, consumers, trade professionals, service companies, scientists and others who have an interest in the establishment of standards. The standards are subject to a fee and can be ordered using the DIN home page.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
Copyright © 2006–2018 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
This manual introduces the basic concepts of system security on openSUSE Leap. It covers extensive documentation about the authentication mechanisms available on Linux, such as NIS or LDAP. It deals with aspects of local security like access control lists, encryption and intrusion detection. In the network security part you learn how to secure computers with firewalls and masquerading, and how to set up virtual private networks (VPN). This manual shows how to use security software like AppArmor (which lets you specify per program which files the program may read, write, and execute) or the auditing system that collects information about security-relevant events.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/, log in, and click .
For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and
parameters
user: users or groups
package name: the name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
, › : menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as a non-privileged user.

root # command
tux > sudo command

Commands that can be run by non-privileged users.

tux > command
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
One of the main characteristics of a Linux or Unix system is its ability to handle several users at the same time (multiuser) and to allow these users to perform several tasks (multitasking) on the same computer simultaneously. Moreover, the operating system is network transparent. The users often do not know whether the data and applications they are using are provided locally from their machine or made available over the network.
With the multiuser capability, the data of different users must be stored separately, and security and privacy need to be guaranteed. Data security was already an important issue, even before computers could be linked through networks. Like today, the most important concern was the ability to keep data available in spite of a lost or otherwise damaged data medium (usually a hard disk).
This section is primarily focused on confidentiality issues and on ways to protect the privacy of users. But it cannot be stressed enough that a comprehensive security concept should always include procedures to have a regularly updated, workable, and tested backup in place. Without this, you could have a very hard time getting your data back—not only in the case of some hardware defect, but also in the case that someone has gained unauthorized access and tampered with files.
There are several ways of accessing data:
personal communication with people who have the desired information or access to the data on a computer
directly through physical access from the console of a computer
over a serial line
using a network link
In all these cases, a user should be authenticated before accessing the resources or data in question. A Web server might be less restrictive in this respect, but you still would not want it to disclose your personal data to an anonymous user.
In the list above, the first case is the one where the highest amount of human interaction is involved (such as when you are contacting a bank employee and are required to prove that you are the person owning that bank account). Then, you are asked to provide a signature, a PIN, or a password to prove that you are the person you claim to be. In some cases, it might be possible to elicit some intelligence from an informed person by mentioning known bits and pieces to win the confidence of that person. The victim could be led to reveal gradually more information, maybe without even being aware of it. Among hackers, this is called social engineering. You can only guard against this by educating people and by dealing with language and information in a conscious way. Before breaking into computer systems, attackers often try to target receptionists, service people working with the company, or even family members. Often such an attack based on social engineering is only discovered at a much later time.
A person wanting to obtain unauthorized access to your data could also use the traditional way and try to get at your hardware directly. Therefore, the machine should be protected against any tampering so that no one can remove, replace, or cripple its components. This also applies to backups and even any network cables or power cords. Also secure the boot procedure, because there are some well-known key combinations that might provoke unusual behavior. Protect yourself against this by setting passwords for the BIOS and the boot loader.
Serial terminals connected to serial ports are still used in many places. Unlike network interfaces, they do not rely on network protocols to communicate with the host. A simple cable or an infrared port is used to send plain characters back and forth between the devices. The cable itself is the weakest point of such a system: with an older printer connected to it, it is easy to record any data being transferred that way. What can be achieved with a printer can also be accomplished in other ways, depending on the effort that goes into the attack.
Reading a file locally on a host is governed by different access rules than opening a network connection with a server on a different host. There is a distinction between local security and network security. The line is drawn where data must be put into packets to be sent somewhere else.
Local security starts with the physical environment at the location in which the computer is running. Set up your machine in a place where security is in line with your expectations and needs. The main goal of local security is to keep users separate from each other, so no user can assume the permissions or the identity of another. This is a general rule to be observed, but it is especially true for the user root, who holds system administration privileges. root can take on the identity of any other local user and read any locally stored file without being prompted for the password.
On a Linux system, passwords are not stored as plain text and the entered text string is not simply matched with the saved pattern. If this were the case, all accounts on your system would be compromised when someone got access to the corresponding file. Instead, the stored password is encrypted and, each time it is entered, is encrypted again and the two encrypted strings are compared. This only provides more security if the encrypted password cannot be reverse-computed into the original text string.
This is achieved by a special kind of algorithm, also called trapdoor algorithm, because it only works in one direction. An attacker who has obtained the encrypted string is not able to get your password by simply applying the same algorithm again. Instead, it would be necessary to test all the possible character combinations until a combination is found that looks like your password when encrypted. With passwords eight characters long, there are many combinations to calculate.
In the seventies, it was argued that this method would be more secure than others because of the relative slowness of the algorithm used, which took a few seconds to encrypt one password. In the meantime, PCs have become powerful enough to do several hundred thousand or even millions of encryptions per second. Because of this, encrypted passwords should not be visible to regular users (/etc/shadow cannot be read by normal users). It is even more important that passwords are not easy to guess, in case the password file becomes visible because of an error. Consequently, it is not really useful to “translate” a password like “tantalize” into “t@nt@1lz3”.
Replacing some letters of a word with similar looking numbers (like writing the password “tantalize” as “t@nt@1lz3”) is not sufficient. Password cracking programs that use dictionaries to guess words also play with substitutions like that. A better way is to make up a word that only makes sense to you personally, like the first letters of the words of a sentence or the title of a book, such as “The Name of the Rose” by Umberto Eco. This would give the following safe password: “TNotRbUE9”. In contrast, passwords like “beerbuddy” or “jasmine76” are easily guessed even by someone who has only some casual knowledge about you.
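The comparison procedure described above can be sketched with any one-way hash. The commands below assume GNU coreutils and use sha256sum purely for illustration; real login passwords use salted, deliberately slow hashes stored in /etc/shadow:

```shell
# Sketch of the one-way ("trapdoor") property: the same input always
# yields the same digest, but the digest cannot be reversed into the
# password. (sha256sum is illustrative only; /etc/shadow uses salted,
# slow hashes such as sha512-crypt.)
h1=$(printf '%s' 'TNotRbUE9' | sha256sum | cut -d' ' -f1)
h2=$(printf '%s' 'TNotRbUE9' | sha256sum | cut -d' ' -f1)   # re-entered password
h3=$(printf '%s' 'tantalize' | sha256sum | cut -d' ' -f1)   # wrong password
[ "$h1" = "$h2" ] && echo "password accepted"
[ "$h1" != "$h3" ] && echo "password rejected"
```

Only the digests are ever compared; the plain-text password is never stored.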
Configure your system so it cannot be booted from a removable device, either by removing the drives entirely or by setting a BIOS password and configuring the BIOS to allow booting from a hard disk only. Normally, a Linux system is started by a boot loader, allowing you to pass additional options to the booted kernel. Prevent others from using such parameters during boot by setting an additional password for the boot loader (see Section 12.2.6, “Setting a Boot Password” for instructions). This is crucial to your system's security. Not only does the kernel itself run with root permissions, but it is also the first authority to grant root permissions at system start-up.
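A hedged sketch of setting a boot loader password with GRUB 2, as shipped with openSUSE (command and file names may differ between releases; the hash shown is a placeholder):

```shell
# Generate a PBKDF2 hash of the chosen boot password (interactive):
#   grub2-mkpasswd-pbkdf2
# Protect the boot loader by adding to /etc/grub.d/40_custom:
#   set superusers="root"
#   password_pbkdf2 root grub.pbkdf2.sha512.10000.<generated hash>
# Finally, regenerate the boot loader configuration:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```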
As a general rule, always work with the most restrictive privileges possible for a given task. For example, it is definitely not necessary to be root to read or write e-mail. If the mail program has a bug, this bug could be exploited for an attack that acts with exactly the permissions of the program when it was started. By following the above rule, you minimize the possible damage.
The permissions of all files included in the openSUSE Leap distribution are carefully chosen. A system administrator who installs additional software or other files should take great care when doing so, especially when setting the permission bits. Experienced and security-conscious system administrators always use the -l option with the command ls to get an extensive file list, which allows them to detect any incorrect file permissions immediately. An incorrect file attribute not only means that files could be changed or deleted. Modified files could be executed by root or, in the case of configuration files, programs could use such files with the permissions of root. This significantly increases the possibilities of an attack. Attacks like these are called cuckoo eggs, because the program (the egg) is executed (hatched) by a different user (the bird), similar to how a cuckoo tricks other birds into hatching its eggs.
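The permission checks described above can be tried safely in a scratch directory; the paths below are examples:

```shell
# World-writable files stand out with find's permission tests.
mkdir -p /tmp/permdemo
touch /tmp/permdemo/app.conf
chmod 666 /tmp/permdemo/app.conf          # world-writable: suspicious
find /tmp/permdemo -type f -perm -o=w     # lists the file
chmod 644 /tmp/permdemo/app.conf
find /tmp/permdemo -type f -perm -o=w     # now prints nothing
# On a real system, setuid binaries can be inventoried with:
#   find / -type f -perm -4000 2>/dev/null
```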
An openSUSE® Leap system includes the files permissions, permissions.easy, permissions.secure, and permissions.paranoid, all in the directory /etc. The purpose of these files is to define special permissions, such as world-writable directories or, for files, the setuser ID (setuid) bit: programs with the setuid bit set do not run with the permissions of the user who launched them, but with the permissions of the file owner, usually root. An administrator can use the file /etc/permissions.local to add their own settings.
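A sketch of a custom entry, assuming SUSE's permissions file format and its chkstat tool (the path and mode below are hypothetical examples, not shipped defaults):

```shell
# Hypothetical entry in /etc/permissions.local:
#   /usr/local/bin/mytool   root:root   0755
# Apply the configured profile with:
#   chkstat --system /etc/permissions.local
```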
To define which of the above files is used by openSUSE Leap's configuration programs to set permissions, select in the section of YaST. To learn more about the topic, read the comments in /etc/permissions or consult the manual page of chmod (man chmod).
Special care must be taken whenever a program needs to process data that can be changed by a user, but this is more of an issue for the programmer of an application than for regular users. The programmer must make sure that the application interprets data in the correct way, without writing it into memory areas that are too small to hold it. Also, the program should hand over data in a consistent manner, using interfaces defined for that purpose.
A buffer overflow can happen if the actual size of a memory buffer is not taken into account when writing to that buffer. There are cases where this data (as generated by the user) uses up more space than what is available in the buffer. As a result, data is written beyond the end of that buffer area, which, under certain circumstances, makes it possible for a program to execute program sequences influenced by the user (and not by the programmer), rather than processing user data only. A bug of this kind may have serious consequences, especially if the program is being executed with special privileges (see Section 1.1.1.3, “File Permissions”).
Format string bugs work in a slightly different way, but again it is the user input that could lead the program astray. Usually, these programming errors are exploited with programs executed with special permissions—setuid and setgid programs—which also means that you can protect your data and your system from such bugs by removing the corresponding execution privileges from programs. Again, the best way is to apply a policy of using the lowest possible privileges (see Section 1.1.1.3, “File Permissions”).
Given that buffer overflows and format string bugs are related to the handling of user data, they are only exploitable if access has been given to a local account. Many of the bugs that have been reported can also be exploited over a network link. Accordingly, buffer overflows and format string bugs should be classified as being relevant for both local and network security.
Contrary to popular opinion, there are viruses that run on Linux. However, the viruses that are known were released by their authors as a proof of concept that the technique works as intended. None of these viruses have been spotted in the wild so far.
Viruses cannot survive and spread without a host on which to live. In this case, the host would be a program or an important storage area of the system (for example, the master boot record) that needs to be writable for the program code of the virus. Because of its multiuser capability, Linux can restrict write access to certain files (this is especially important with system files). Therefore, if you did your normal work with root permissions, you would increase the chance of the system being infected by a virus. In contrast, if you follow the principle of using the lowest possible privileges as mentioned above, chances of getting a virus are slim.
Apart from that, you should never rush into executing a program from some Internet site that you do not really know. openSUSE Leap's RPM packages carry a cryptographic signature, as a digital label verifying that the necessary care was taken in building them. Viruses are a typical sign that the administrator or the user lacks the required security awareness, putting at risk even a system that should be highly secure by its very design.
Viruses should not be confused with worms, which belong entirely to the world of networks. Worms do not need a host to spread.
Network security is important for protecting against attacks that are started from outside the network. The typical login procedure requiring a user name and a password for user authentication is still a local security issue. In the particular case of logging in over a network, differentiate between the two security aspects: what happens up to the actual authentication is network security, and anything that happens afterward is local security.
As mentioned at the beginning, network transparency is one of the central characteristics of a Unix system. X, the windowing system of Unix operating systems, can use this feature in an impressive way. With X, it is no problem to log in to a remote host and start a graphical program that is then sent over the network to be displayed on your computer.
When an X client needs to be displayed remotely using an X server, the latter should protect the resource it manages (the display) from unauthorized access. In more concrete terms, certain permissions must be given to the client program. With the X Window System, there are two ways to do this, called host-based access control and cookie-based access control. The former relies on the IP address of the host where the client should run and is controlled with the program xhost, which enters the IP address of a legitimate client into a database belonging to the X server. However, relying on IP addresses for authentication is not very secure. For example, if there were a second user working on the host sending the client program, that user would have access to the X server as well—like someone stealing the IP address. Because of these shortcomings, this authentication method is not described in more detail here, but you can learn about it with man xhost.
In the case of cookie-based access control, a character string is generated that is only known to the X server and to the legitimate user, like an ID card of some kind. This cookie is stored on login in the file .Xauthority in the user's home directory and is available to any X client wanting to use the X server to display a window. The file .Xauthority can be examined by the user with the tool xauth. If you rename .Xauthority, or if you accidentally delete the file from your home directory, you will not be able to open any new windows or X clients.
SSH (secure shell) can be used to encrypt a network connection and forward it to an X server transparently. This is also called X forwarding. X forwarding is achieved by simulating an X server on the server side and setting a DISPLAY variable for the shell on the remote host. Further details about SSH can be found in Chapter 14, SSH: Secure Network Operations.
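A sketch of the pieces involved, assuming a stock OpenSSH setup (the host and user names are examples):

```shell
# Server side: X forwarding must be permitted in /etc/ssh/sshd_config:
#   X11Forwarding yes
# Client side: request forwarding per connection with -X; ssh then sets
# DISPLAY in the remote shell, so any X client is tunneled back:
#   ssh -X tux@jupiter.example.com
#   xterm &
```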
If you do not consider the host where you log in to be a secure host, do not use X forwarding. If X forwarding is enabled, an attacker could authenticate via your SSH connection. The attacker could then intrude on your X server and, for example, read your keyboard input.
As discussed in Section 1.1.1.4, “Buffer Overflows and Format String Bugs”, buffer overflows and format string bugs should be classified as issues applying to both local and network security. As with the local variants of such bugs, buffer overflows in network programs, when successfully exploited, are mostly used to obtain root permissions. Even if that is not the case, an attacker could use the bug to gain access to an unprivileged local account to exploit other vulnerabilities that might exist on the system.
Buffer overflows and format string bugs exploitable over a network link are certainly the most frequent form of remote attacks in general. Exploits—programs to take advantage of these newly-found security holes—are often posted on security mailing lists. They can be used to target the vulnerability without knowing the details of the code.
Experience has shown that the availability of exploit codes has contributed to more secure operating systems, as they force operating system makers to fix problems in their software. With free software, anyone has access to the source code (openSUSE Leap comes with complete source code) and anyone who finds a vulnerability and its exploit code can submit a patch to fix the corresponding bug.
The purpose of a denial of service (DoS) attack is to block a server program or even an entire system. This can be achieved in several ways: overloading the server, keeping it busy with garbage packets, or exploiting a remote buffer overflow. Often, a DoS attack is made with the sole purpose of making the service disappear. However, when a given service has become unavailable, communications could become vulnerable to man-in-the-middle attacks (sniffing, TCP connection hijacking, spoofing) and DNS poisoning.
In general, any remote attack performed by an attacker who puts himself between the communicating hosts is called a man-in-the-middle attack. What almost all types of man-in-the-middle attacks have in common is that the victim is usually not aware that there is something happening. There are many variants. For example, the attacker could pick up a connection request and forward that to the target machine. Now the victim has unwittingly established a connection with the wrong host, because the other end is posing as the legitimate destination machine.
The simplest form of a man-in-the-middle attack is called sniffing (the attacker is “only” listening to the network traffic passing by). As a more complex attack, the “man in the middle” could try to take over an already established connection (hijacking). To do so, the attacker would need to analyze the packets for some time to be able to predict the TCP sequence numbers belonging to the connection. When the attacker finally seizes the role of the target host, the victims notice it, because they get an error message saying the connection was terminated because of a failure. The fact that some protocols are not secured against hijacking through encryption, and only perform a simple authentication procedure when establishing the connection, makes it easier for attackers.
Spoofing is an attack where packets are modified to contain counterfeit source data, usually the IP address. Most active forms of attack rely on sending out such fake packets (something that, on a Linux machine, can only be done by the superuser, root).
Many of the attacks mentioned are carried out in combination with a DoS. If an attacker sees an opportunity to bring down a certain host abruptly, even if only for a short time, it makes it easier for him to push the active attack, because the host cannot interfere with the attack for some time.
DNS poisoning means that the attacker corrupts the cache of a DNS server by replying to it with spoofed DNS reply packets, trying to get the server to send certain data to a victim who is requesting information from that server. Many servers maintain a trust relationship with other hosts, based on IP addresses or host names. The attacker needs a good understanding of the actual structure of the trust relationships among hosts to disguise itself as one of the trusted hosts. Usually, the attacker analyzes some packets received from the server to get the necessary information. The attacker often needs to target a well-timed DoS attack at the name server as well. Protect yourself by using encrypted connections that can verify the identity of the hosts to which to connect.
Worms are often confused with viruses, but there is a clear difference between the two. Unlike viruses, worms do not need to infect a host program to live. Instead, they are specialized to spread as quickly as possible on network structures. The worms that appeared in the past, such as Ramen, Lion, or Adore, used well-known security holes in server programs like bind8. Protection against worms is relatively easy. Given that some time elapses between the discovery of a security hole and the moment the worm hits your server, there is a good chance that an updated version of the affected program is available on time. That is only useful if the administrator actually installs the security updates on the systems in question.
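Installing such updates promptly is the key defense. On openSUSE Leap this can be sketched with zypper (commands run as root; output depends on the system's patch state):

```shell
# Refresh repository metadata, review pending security patches, and
# install them:
#   zypper refresh
#   zypper list-patches --category security
#   zypper patch
```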
To handle security competently, it is important to observe some recommendations. You may find the following list of rules useful in dealing with basic security concerns:
Get and install the updated packages recommended by security announcements as quickly as possible.
Stay informed about the latest security issues:
http://lists.opensuse.org/opensuse-security-announce/ is the SUSE mailing list for security announcements. It is a first-hand source of information regarding updated packages and includes members of SUSE's security team among its active contributors. You can subscribe to this list at http://en.opensuse.org/openSUSE:Mailing_lists.
Find SUSE security advisories at https://www.suse.com/security/cve/.
bugtraq@securityfocus.com is one of the best-known
security mailing lists worldwide. Reading this list, which receives
between 15 and 20 postings per day, is recommended. More information
can be found at http://www.securityfocus.com.
Discuss any security issues of interest on our mailing list
opensuse-security@opensuse.org.
According to the rule of using the most restrictive set of permissions
possible for every job, avoid doing your regular jobs as
root. This reduces the risk
of getting a cuckoo egg or a virus and protects you from your own
mistakes.
If possible, always try to use encrypted connections to work on a
remote machine. Using ssh (secure shell) to replace
telnet, ftp,
rsh, and rlogin should be
standard practice.
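As a sketch, the encrypted replacements look like this; the user name and host are placeholders:

```shell
# Encrypted equivalents of the legacy cleartext tools
# (tux and host.example.com are placeholders).
ssh tux@host.example.com                    # replaces telnet, rlogin, rsh
scp report.txt tux@host.example.com:/tmp/   # replaces rcp
sftp tux@host.example.com                   # replaces ftp
```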
Avoid using authentication methods based solely on IP addresses.
Try to keep the most important network-related packages up-to-date and
subscribe to the corresponding mailing lists to receive announcements
on new versions of such programs (bind,
postfix, ssh, etc.). The same
should apply to software relevant to local security.
Change the /etc/permissions file to optimize
the permissions of files crucial to your system's security. If you
remove the setuid bit from a program, it might well be that it
cannot do its job anymore in the intended way. On the other hand,
the program will usually have ceased to be a potential security
risk. You might take a similar approach with world-writable
directories and files.
Disable any network services you do not absolutely require for your
server to work properly. This makes your system safer. Open ports, with
the socket state LISTEN, can be found with the program
netstat. It is recommended to use
netstat -ap or
netstat -anp. The
-p option shows which process, and under
which name, is occupying a port.
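A minimal sketch of such a check follows; ss from the iproute2 package is shown as an alternative for systems where net-tools is not installed:

```shell
# Show open ports together with the owning process
# (run as root to see process names for all sockets).
netstat -anp | grep LISTEN
# Equivalent information with the modern ss tool:
ss -tlnp
```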
Compare the netstat results with those of a thorough
port scan done from outside your host. An excellent program for this
job is nmap, which not only checks out the ports of
your machine, but also draws some conclusions as to which services are
waiting behind them. However, port scanning may be interpreted as an
aggressive act, so do not do this on a host without the explicit
approval of the administrator. Finally, remember that it is important
not only to scan TCP ports, but also UDP ports (options
-sS and -sU).
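A combined TCP and UDP scan might look like the following sketch; the address 192.0.2.10 is a placeholder, and such a scan requires the administrator's approval:

```shell
# TCP SYN scan plus UDP scan of the first 1024 ports (run as root;
# scan only hosts whose administrator has given explicit approval).
nmap -sS -sU -p 1-1024 192.0.2.10
```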
To monitor the integrity of the files of your system in a reliable way,
use the program AIDE (Advanced Intrusion Detection
Environment), available on openSUSE Leap. Encrypt the database
created by AIDE to prevent someone from tampering with it. Furthermore,
keep a backup of this database available outside your machine, stored
on an external data medium not connected to it by a network link.
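A typical AIDE workflow can be sketched as follows; the database path shown is a common default and may differ on your installation:

```shell
# Build the initial integrity database (run as root).
aide --init
# Activate the freshly created database (path is a common default).
mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
# Later, compare the current state of the system against the database.
aide --check
```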
Take proper care when installing any third-party software. There have been cases where a hacker had built a Trojan horse into the TAR archive of a security software package, which was fortunately discovered very quickly. If you install a binary package, make sure there is no doubt about the trustworthiness of the site from which you downloaded it.
SUSE's RPM packages are gpg-signed. The key used by SUSE for signing is:
ID:9C800ACA 2000-10-19 SUSE Package Signing Key <build@suse.de>
Key fingerprint = 79C1 79B2 E1C8 20C1 890F 9994 A84E DAE8 9C80 0ACA
The command rpm --checksig
package.rpm shows whether the checksum and the signature of an
uninstalled package are correct. Find the key on the first CD of the
distribution and on most key servers worldwide.
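Verifying a downloaded package can be sketched like this; package.rpm is a placeholder file name:

```shell
# Verify digest and GPG signature of a downloaded, not yet installed package
# (package.rpm is a placeholder).
rpm --checksig package.rpm
# List the public keys currently imported into the RPM database:
rpm -q gpg-pubkey
```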
Check backups of user and system files regularly. Consider that if you do not test whether the backup works, it might actually be worthless.
Check your log files. Whenever possible, write a small script to search for suspicious entries. Admittedly, this is not exactly a trivial task. In the end, only you can know which entries are unusual and which are not.
Use tcp_wrapper to restrict access to the individual
services running on your machine, so you have explicit control over
which IP addresses can connect to a service. For further information
regarding tcp_wrapper, consult the manual pages of
tcpd and hosts_access (man 8
tcpd,
man hosts_access).
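A minimal /etc/hosts.allow sketch might look like the following; the network address is a placeholder, and the ALLOW/DENY keywords belong to the extended syntax described in hosts_access(5):

```
# /etc/hosts.allow -- sketch; adjust daemon names and networks
# to your environment.
sshd : 192.168.1.0/255.255.255.0 : ALLOW
ALL  : ALL                       : DENY
```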
Use SuSEfirewall to enhance the security provided by
tcpd (tcp_wrapper).
Design your security measures to be redundant: a message seen twice is much better than no message.
If you use suspend to disk, consider configuring the suspend image
encryption using the configure-suspend-encryption.sh
script. The program creates the key, copies it to
/etc/suspend.key, and modifies
/etc/suspend.conf to use encryption for suspend
images.
If you discover a security-related problem (check the available update packages first), write an e-mail to <security@suse.de>. Include a detailed description of the problem and the version number of the package concerned. SUSE will try to send a reply when possible. You are encouraged to pgp-encrypt your e-mail messages. SUSE's PGP key is:
ID:3D25D3D9 1999-03-06 SUSE Security Team <security@suse.de> Key fingerprint = 73 5F 2E 99 DF DB 94 C4 8F 5A A3 AE AF 22 F2 D5
This key is also available for download from http://www.suse.com/support/security/contact.html.
Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a systemwide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.
When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are actually using. This can be done by means of NIS and NFS services. NFS distributes file systems over a network and is discussed in Chapter 22, Sharing File Systems with NFS.
NIS (Network Information Service) can be described as a database-like
service that provides access to the contents of
/etc/passwd, /etc/shadow, and
/etc/group across networks. NIS can also be used
for other purposes (making the contents of files like
/etc/hosts or /etc/services
available, for example), but this is beyond the scope of this
introduction. People often refer to NIS as YP,
because it works like the network's “yellow pages.”
The Authentication Server is based on LDAP and optionally Kerberos. On openSUSE Leap you can configure it with a YaST wizard.
For more information about LDAP, see Chapter 5, LDAP—A Directory Service, and about Kerberos, see Chapter 6, Network Authentication with Kerberos.
The Lightweight Directory Access Protocol (LDAP) is a set of protocols designed to access and maintain information directories. LDAP can be used for user and group management, system configuration management, address management, and more. This chapter provides a basic understanding of how OpenLDAP works.
Kerberos is a network authentication protocol which also provides encryption. This chapter describes how to set up Kerberos and integrate services like LDAP and NFS.
Active Directory* (AD) is a directory-service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces po…
Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a systemwide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.
System administrators and programmers often want to restrict access to certain parts of the system or to limit the use of certain functions of an application. Without PAM, applications must be adapted every time a new authentication mechanism, such as LDAP, Samba, or Kerberos, is introduced. However, this process is time-consuming and error-prone. One way to avoid these drawbacks is to separate applications from the authentication mechanism and delegate authentication to centrally managed modules. Whenever a newly required authentication scheme is needed, it is sufficient to adapt or write a suitable PAM module for use by the program in question.
The PAM concept consists of:
PAM modules, which are a set of shared libraries for a specific authentication mechanism.
A module stack of one or more PAM modules.
A PAM-aware service which needs authentication by
using a module stack or PAM modules. Usually a service is a familiar
name of the corresponding application, like login or
su. The service name other is a
reserved word for default rules.
Module arguments, with which the execution of a single PAM module can be influenced.
A mechanism evaluating the result of each single PAM module execution. On success, the next PAM module is executed. How a failure is dealt with depends on the configuration: “no influence, proceed” up to “terminate immediately” and anything in between are valid options.
PAM can be configured in two ways:
File-based configuration (/etc/pam.conf)
The configuration of each service is stored in
/etc/pam.conf. However, for maintenance and
usability reasons, this configuration scheme is not used in
openSUSE Leap.
Directory-based configuration (/etc/pam.d/)
Every service (or program) that relies on the PAM mechanism has its
own configuration file in the /etc/pam.d/
directory. For example, the service for
sshd can be found in the
/etc/pam.d/sshd file.
The files under /etc/pam.d/ define the PAM modules
used for authentication. Each file consists of lines, which define a
service, and each line consists of a maximum of four components:
TYPE CONTROL MODULE_PATH MODULE_ARGS
The components have the following meaning:
Declares the type of the service. PAM modules are processed as stacks. Different types of modules have different purposes. For example, one module checks the password, another verifies the location from which the system is accessed, and yet another reads user-specific settings. PAM knows about four different types of modules:
auth
Check the user's authenticity, traditionally by querying a password. However, this can also be achieved with a chip card or through biometrics (for example, fingerprints or iris scan).
account
Modules of this type check if the user has general permission to use the requested service. As an example, such a check should be performed to ensure that no one can log in with the user name of an expired account.
password
The purpose of this type of module is to enable the change of an authentication token. Usually this is a password.
session
Modules of this type are responsible for managing and configuring user sessions. They are started before and after authentication to log login attempts and configure the user's specific environment (mail accounts, home directory, system limits, etc.).
Indicates the behavior of a PAM module. Each module can have the following control flags:
required
A module with this flag must be successfully processed before the
authentication may proceed. After the failure of a module with the
required flag, all other modules with the same
flag are processed before the user receives a message about the
failure of the authentication attempt.
requisite
Modules having this flag must also be processed successfully, in
much the same way as a module with the required
flag. However, in case of failure a module with this flag gives
immediate feedback to the user and no further modules are
processed. In case of success, other modules are subsequently
processed, like any modules with the required
flag. The requisite flag can be used as a basic
filter checking for the existence of certain conditions that are
essential for a correct authentication.
sufficient
After a module with this flag has been successfully processed, the
requesting application receives an immediate message about the
success and no further modules are processed, provided there was no
preceding failure of a module with the required
flag. The failure of a module with the
sufficient flag has no direct consequences, in
the sense that any subsequent modules are processed in their
respective order.
optional
The failure or success of a module with this flag does not have any direct consequences. This can be useful for modules that are only intended to display a message (for example, to tell the user that mail has arrived) without taking any further action.
include
If this flag is given, the file specified as argument is inserted at this place.
Contains a full file name of a PAM module. It does not need to be
specified explicitly, as long as the module is located in the default
directory /lib/security (for all 64-bit platforms
supported by openSUSE® Leap, the directory is
/lib64/security).
Contains a space-separated list of options to influence the behavior
of a PAM module, such as debug (enables debugging) or
nullok (allows the use of empty passwords).
In addition, there are global configuration files for PAM modules under
/etc/security, which define the exact behavior of
these modules (examples include pam_env.conf and
time.conf). Every application that uses a PAM module
actually calls a set of PAM functions, which then process the information
in the various configuration files and return the result to the
requesting application.
To simplify the creation and maintenance of PAM modules, common default
configuration files for the types auth,
account, password, and
session modules have been introduced. These are
retrieved from every application's PAM configuration. Updates to the
global PAM configuration modules in common-* are
thus propagated across all PAM configuration files without requiring the
administrator to update every single PAM configuration file.
The global PAM configuration files are maintained using the
pam-config tool. This tool automatically adds new
modules to the configuration, changes the configuration of existing ones
or deletes modules (or options) from the configurations. Manual
intervention in maintaining PAM configurations is minimized or no longer
required.
When using a 64-bit operating system, it is possible to also include a runtime environment for 32-bit applications. In this case, make sure that you also install the 32-bit version of the PAM modules.
Consider the PAM configuration of sshd as an example:
PAM Configuration for sshd (/etc/pam.d/sshd):

#%PAM-1.0
auth      requisite  pam_nologin.so
auth      include    common-auth
account   requisite  pam_nologin.so
account   include    common-account
password  include    common-password
session   required   pam_loginuid.so
session   include    common-session
session   optional   pam_lastlog.so silent noupdate showfailed

The first line declares the version of this configuration file for PAM 1.0. This is merely a convention, but could be used in the future to check the version.
pam_nologin checks whether the file /etc/nologin exists; if it does, no user other than root may log in.
The include lines refer to the configuration files of the four module types: common-auth, common-account, common-password, and common-session.
pam_loginuid sets the login uid process attribute for the process that was authenticated.
pam_lastlog displays information about the last login of a user.
By including the configuration files instead of adding each module separately to the respective PAM configuration, you automatically get an updated PAM configuration when an administrator changes the defaults. Formerly, you needed to adjust all configuration files manually for all applications when changes to PAM occurred or a new application was installed. Now the PAM configuration is made with central configuration files and all changes are automatically inherited by the PAM configuration of each service.
The first include file (common-auth) calls three
modules of the auth type:
pam_env.so,
pam_gnome_keyring.so and
pam_unix.so. See
Example 2.2, “Default Configuration for the auth Section (common-auth)”.
Default Configuration for the auth Section (common-auth):

auth  required  pam_env.so
auth  optional  pam_gnome_keyring.so
auth  required  pam_unix.so try_first_pass

pam_env sets and expands environment variables as defined in /etc/security/pam_env.conf.
pam_gnome_keyring unlocks the user's GNOME keyring with the login password.
pam_unix performs classic Unix password authentication against /etc/passwd and /etc/shadow.
The whole stack of auth modules is processed before
sshd gets any feedback about
whether the login has succeeded. All modules of the stack having the
required control flag must be processed successfully
before sshd receives a message
about the positive result. If one of the modules is not successful, the
entire module stack is still processed and only then is
sshd notified about the negative
result.
When all modules of the auth type have been
successfully processed, another include statement is processed, in this
case, that in Example 2.3, “Default Configuration for the account Section (common-account)”.
common-account contains only one module,
pam_unix. If pam_unix returns the
result that the user exists, sshd receives a message announcing this
success and the next stack of modules (password) is
processed, shown in Example 2.4, “Default Configuration for the password Section (common-password)”.
Default Configuration for the account Section (common-account):

account  required  pam_unix.so try_first_pass
Default Configuration for the password Section (common-password):

password  requisite  pam_cracklib.so
password  optional   pam_gnome_keyring.so use_authtok
password  required   pam_unix.so use_authtok nullok shadow try_first_pass
Again, the PAM configuration of
sshd involves only an include
statement referring to the default configuration for
password modules located in
common-password. These modules must successfully be
completed (control flags requisite and
required) whenever the application requests the change
of an authentication token.
Changing a password or another authentication token requires a security
check. This is achieved with the pam_cracklib
module. The pam_unix module used afterward carries
over any old and new passwords from pam_cracklib, so
the user does not need to authenticate again after changing the password.
This procedure makes it impossible to circumvent the checks carried out
by pam_cracklib. Whenever the
account or the auth type are
configured to complain about expired passwords, the
password modules should also be used.
Default Configuration for the session Section (common-session):

session  required  pam_limits.so
session  required  pam_unix.so try_first_pass
session  optional  pam_umask.so
session  optional  pam_systemd.so
session  optional  pam_gnome_keyring.so auto_start only_if=gdm,gdm-password,lxdm,lightdm
session  optional  pam_env.so
As the final step, the modules of the session type
(bundled in the common-session file) are called to
configure the session according to the settings for the user in question.
The pam_limits module loads the file
/etc/security/limits.conf, which may define limits
on the use of certain system resources. The pam_unix
module is processed again. The pam_umask module can
be used to set the file mode creation mask. Since this module carries the
optional flag, a failure of this module would not
affect the successful completion of the entire session module stack. The
session modules are called a second time when the user
logs out.
Some PAM modules are configurable. The configuration files are
located in /etc/security. This section briefly
describes the configuration files relevant to the sshd
example—pam_env.conf and
limits.conf.
pam_env.conf can be used to define a standardized
environment for users that is set whenever the
pam_env module is called. It allows presetting
environment variables using the following syntax:
VARIABLE [DEFAULT=VALUE] [OVERRIDE=VALUE]
VARIABLE
Name of the environment variable to set.
[DEFAULT=<value>]
Default VALUE the administrator wants to set.
[OVERRIDE=<value>]
Values that may be queried and set by
pam_env, overriding the default value.
A typical example of how pam_env can be used is
the adaptation of the DISPLAY variable, which is changed
whenever a remote login takes place. This is shown in
Example 2.6, “pam_env.conf”.
REMOTEHOST DEFAULT=localhost OVERRIDE=@{PAM_RHOST}
DISPLAY DEFAULT=${REMOTEHOST}:0.0 OVERRIDE=${DISPLAY}
The first line sets the value of the REMOTEHOST variable
to localhost, which is used whenever
pam_env cannot determine any other value. The
DISPLAY variable in turn contains the value of
REMOTEHOST. Find more information in the comments in
/etc/security/pam_env.conf.
The purpose of pam_mount is to mount user home
directories during the login process, and to unmount them during logout
in an environment where a central file server keeps all the home
directories of users. With this method, it is not necessary to mount a
complete /home directory where all the user home
directories would be accessible. Instead, only the home directory of the
user who is about to log in is mounted.
After installing pam_mount, a template for
pam_mount.conf.xml is available in
/etc/security. The description of the various
elements can be found in the manual page man 5
pam_mount.conf.
A basic configuration of this feature can be done with YaST. Select › › to add the file server; see Section 21.5, “Configuring Clients”.
System limits can be set on a user or group basis in
limits.conf, which is read by the
pam_limits module. The file allows you to set
hard limits, which may not be exceeded, and soft limits, which
may be exceeded temporarily. For more information about the syntax and
the options, see the comments in
/etc/security/limits.conf.
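A limits.conf fragment might look like the following sketch; the domains and values are hypothetical:

```
# /etc/security/limits.conf -- sketch with hypothetical values
#<domain>   <type>  <item>   <value>
@students   hard    nproc    50     # at most 50 processes per member of group students
*           soft    core     0      # no core dumps unless raised explicitly
```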
The pam-config tool helps you configure the global PAM
configuration files (/etc/pam.d/common-*) and
several selected application configurations. For a list of supported
modules, use the pam-config --list-modules command.
Use the pam-config command to maintain your PAM
configuration files. Add new modules to your PAM configurations, delete
other modules or modify options to these modules. When changing global
PAM configuration files, no manual tweaking of the PAM setup for
individual applications is required.
A simple use case for pam-config involves the
following:
Auto-generate a fresh Unix-style PAM configuration.
Let pam-config create the simplest possible setup which you can extend
later on. The pam-config --create command creates a
simple Unix authentication configuration. Pre-existing configuration
files not maintained by pam-config are overwritten, but backup copies
are kept as *.pam-config-backup.
Add a new authentication method.
Adding a new authentication method (for example, LDAP) to your stack
of PAM modules comes down to a simple pam-config --add
--ldap command. LDAP is added wherever appropriate across
all common-*-pc PAM configuration files.
Add debugging for test purposes.
To make sure the new authentication procedure works as planned, turn
on debugging for all PAM-related operations. The pam-config
--add --ldap-debug turns on debugging for LDAP-related PAM
operations. Find the debugging output in the systemd journal (see
Chapter 11, journalctl: Query the systemd Journal).
Query your setup.
Before you finally apply your new PAM setup, check if it contains all
the options you wanted to add. The pam-config --query
--MODULE command lists both the type and
the options for the queried PAM module.
Remove the debug options.
Finally, remove the debug option from your setup when you are entirely
satisfied with the performance of it. The pam-config --delete
--ldap-debug command turns off debugging for LDAP
authentication. In case you had debugging options added for other
modules, use similar commands to turn these off.
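The use case above, condensed into its commands (requires the pam-config tool and root privileges):

```shell
pam-config --create               # fresh Unix-style setup; backups end in *.pam-config-backup
pam-config --add --ldap           # add LDAP wherever appropriate across common-*-pc
pam-config --add --ldap-debug     # log LDAP-related PAM operations to the journal
pam-config --query --ldap         # inspect type and options of the queried module
pam-config --delete --ldap-debug  # turn debugging off again when satisfied
```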
For more information on the pam-config command and the
options available, refer to the manual page of
pam-config(8).
If you prefer to manually create or maintain your PAM configuration
files, make sure to disable pam-config for these
files.
When you create your PAM configuration files from scratch using the
pam-config --create command, it creates symbolic links
from the common-* to the
common-*-pc files.
pam-config only modifies the
common-*-pc configuration
files. Removing these symbolic links effectively disables pam-config,
because pam-config only operates on the
common-*-pc files and
these files are not put into effect without the symbolic links.
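Replacing a symbolic link with a plain copy, shown here for common-auth as a sketch, disables pam-config for that file; repeat the step for the other common-* files:

```shell
# Replace the symbolic link with an independent copy (run as root).
cd /etc/pam.d
ls -l common-auth               # normally points to common-auth-pc
rm common-auth
cp common-auth-pc common-auth   # now a plain file, no longer managed by pam-config
```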
Including pam_systemd.so in the Configuration
If you are creating your own PAM configuration, make sure to include
a session optional pam_systemd.so line. Not including
pam_systemd.so can cause problems with
systemd task limits. For details, refer to the man page of
pam_systemd.so.
In the /usr/share/doc/packages/pam directory after
installing the pam-doc package, find the
following additional documentation:
In the top level of this directory, there is the
modules subdirectory holding README files about
the available PAM modules.
This document comprises everything that the system administrator should know about PAM. It discusses a range of topics, from the syntax of configuration files to the security aspects of PAM.
This document summarizes the topic from the developer's point of view, with information about how to write standard-compliant PAM modules.
This document comprises everything needed by an application developer who wants to use the PAM libraries.
PAM in general and the individual modules come with manual pages that provide a good overview of the functionality of all the components.
When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are actually using. This can be done by means of NIS and NFS services. NFS distributes file systems over a network and is discussed in Chapter 22, Sharing File Systems with NFS.
NIS (Network Information Service) can be described as a database-like
service that provides access to the contents of
/etc/passwd, /etc/shadow, and
/etc/group across networks. NIS can also be used
for other purposes (making the contents of files like
/etc/hosts or /etc/services
available, for example), but this is beyond the scope of this
introduction. People often refer to NIS as YP,
because it works like the network's “yellow pages.”
To distribute NIS information across networks, either install one single server (a master) that serves all clients, or set up NIS slave servers that request this information from the master and relay it to their respective clients.
To configure just one NIS server for your network, proceed with Section 3.1.1, “Configuring a NIS Master Server”.
If your NIS master server needs to export its data to slave servers, set up the master server as described in Section 3.1.1, “Configuring a NIS Master Server” and set up slave servers in the subnets as described in Section 3.1.2, “Configuring a NIS Slave Server”.
To manage the NIS Server functionality with YaST, install the yast2-nis-server package by running the zypper in yast2-nis-server command as root. To configure a NIS master server for your network, proceed as follows:
Start › › .
If you need just one NIS server in your network or if this server is to act as the master for further NIS slave servers, select . YaST installs the required packages.
If NIS server software is already installed on your machine, initiate the creation of a NIS master server by clicking .
Determine basic NIS setup options:
Enter the NIS domain name.
Define whether the host should also be a NIS client (enabling users to log in and access data from the NIS server) by selecting .
If your NIS server needs to act as a master server to NIS slave servers in other subnets, select .
The option is only useful with . It speeds up the transfer of maps to the slaves.
Select to allow users
in your network (both local users and those managed through the NIS
server) to change their passwords on the NIS server (with the
command yppasswd). This makes the options
and available. “GECOS”
means that the users can also change their names and address
settings with the command ypchfn.
“Shell” allows users to change their default shell with
the command ypchsh (for example, to switch from
Bash to sh). The new shell must be one of the predefined entries in
/etc/shells.
Select to have YaST adapt the firewall settings for the NIS server.
Leave this dialog with or click to make additional settings.
The additional settings include changing the source
directory of the NIS server (/etc by default).
In addition, passwords can be merged here. The setting should be
to create the user database from the system
authentication files /etc/passwd,
/etc/shadow, and
/etc/group. Also, determine the smallest user
and group ID that should be offered by NIS. Click
to confirm your settings and return to the
previous screen.
If you previously enabled , enter the host names used as slaves and click . If no slave servers exist, this configuration step is skipped.
Continue to the dialog for the database configuration. Specify the NIS Server Maps, the partial databases to transfer from the NIS server to the client. The default settings are usually adequate. Leave this dialog with .
Check which maps should be available and click to continue.
Determine which hosts are allowed to query the NIS server. You can add, edit, or delete hosts by clicking the appropriate button. Specify from which networks requests can be sent to the NIS server. Normally, this is your internal network. In this case, there should be the following two entries:
255.0.0.0   127.0.0.0
0.0.0.0     0.0.0.0
The first entry enables connections from your own host, which is the NIS server. The second one allows all hosts to send requests to the server.
Click to save your changes and exit the setup.
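If you enabled password changing in the setup above, NIS users can employ the following commands from the yp-tools package:

```shell
yppasswd    # change the NIS password
ypchfn      # change GECOS data such as name and office settings
ypchsh      # change the login shell; it must be listed in /etc/shells
```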
To configure additional NIS slave servers in your network, proceed as follows:
Start › › .
Select and click .
If NIS server software is already installed on your machine, initiate the creation of a NIS slave server by clicking .
Complete the basic setup of your NIS slave server:
Enter the NIS domain.
Enter host name or IP address of the master server.
Set if you want to enable user logins on this server.
Adapt the firewall settings with .
Click .
Enter the hosts that are allowed to query the NIS server. You can add, edit, or delete hosts by clicking the appropriate button. Specify all networks from which requests can be sent to the NIS server. If it applies to all networks, use the following configuration:
255.0.0.0   127.0.0.0
0.0.0.0     0.0.0.0
The first entry enables connections from your own host, which is the NIS server. The second one allows all hosts with access to the same network to send requests to the server.
Click to save changes and exit the setup.
To use NIS on a workstation, do the following:
Start › › .
Activate the button.
Enter the NIS domain. This is usually a domain name given by your administrator or a static IP address received by DHCP. For information about DHCP, see Chapter 20, DHCP.
Enter your NIS servers and separate their addresses by spaces. If you do not know your NIS server, click to let YaST search for any NIS servers in your domain. Depending on the size of your local network, this may be a time-consuming process. asks for a NIS server in the local network after the specified servers fail to respond.
Depending on your local installation, you may also want to activate the automounter. This option also installs additional software if required.
If you do not want other hosts to be able to query which server your
client is using, go to the settings and
disable . By checking
, the client can receive
replies from a server communicating through an unprivileged port. For
further information, see
man ypbind.
Click to save them and return to the YaST control center. Your client is now configured with NIS.
The Authentication Server is based on LDAP and optionally Kerberos. On openSUSE Leap you can configure it with a YaST wizard.
For more information about LDAP, see Chapter 5, LDAP—A Directory Service, and about Kerberos, see Chapter 6, Network Authentication with Kerberos.
To set up an authentication server for user account data, make sure the
yast2-auth-server,
openldap2,
krb5-server, and
krb5-client packages are installed; YaST will
remind you and install them if one of these packages is missing. For
Kerberos support, the krb5-plugin-kdb-ldap
package is required.
The first part of the Authentication Server configuration with YaST is setting up an LDAP server, then you can enable Kerberos.
Start YaST as root and select › to invoke the configuration wizard.
Configure the of your LDAP server (you can change these settings later)—see Figure 4.1, “YaST Authentication Server Configuration”:
Set LDAP to be started.
If the LDAP server should announce its services via SLP, check .
Configure .
Click .
Select the server type: , , or .
Select security options ().
It is strongly recommended to . For more information, see Procedure 4.2, “Editing Authentication Server Configuration”, Step 4.
When using authentication without enabling transport encryption using TLS, the password will be transmitted in the clear.
Also consider using LDAP over SSL with certificates.
Confirm by entering an and then clicking —see Figure 4.2, “YaST LDAP Server—New Database”.
In the dialog, decide whether to enable Kerberos authentication or not (you can change these settings later)—see Figure 4.3, “YaST Kerberos Authentication”.
Choose whether Kerberos support is needed or not. If you enable it, also specify your . Then confirm with .
The allows you to specify various aspects such as or ports to use.
Finally, check the and click to exit the configuration wizard.
For changes or additional configuration start the Authentication Server module again and in the left pane expand to make subentries visible—see Figure 4.4, “YaST Editing Authentication Server Configuration”:
With , configure the degree of logging activity (verbosity) of the LDAP server. From the predefined list, select or deselect logging options according to your needs. The more options are enabled, the larger your log files grow.
Configure which connection types the server should offer under . Choose from:
This option enables connection requests (bind requests) from clients using the previous version of the protocol (LDAPv2).
Normally, the LDAP server denies any authentication attempts with empty credentials, that is, a distinguished name (DN) or a password. However, enabling this option makes it possible to connect with a password and no DN to establish an anonymous connection.
Enabling this option makes it possible to connect without authentication (anonymously) using a distinguished name (DN) but no password.
Enabling this option allows non-authenticated (anonymous) update operations. Access is restricted according to ACLs and other rules.
also lets you configure the server flags. Choose from:
The server will no longer accept anonymous bind requests. Note that this does not generally prohibit anonymous directory access.
Completely disable Simple Bind authentication.
The server will no longer force an authenticated connection back to the anonymous state when receiving the StartTLS operation.
The server will disallow the StartTLS operation on already authenticated connections.
To configure secure communication between client and server, proceed with :
Activate to enable TLS and SSL encryption of the client/server communication.
Select the server certificate either by specifying the exact path to its location or by enabling the . If the is not available because it has not been created during installation, choose first—for more information, see Section 17.2, “YaST Modules for CA Management”.
Add Schema files to be included in the server's configuration by selecting in the left part of the dialog. The default selection of schema files applies to the server providing a source of YaST user account data.
YaST allows adding traditional Schema files (usually with a name
ending in .schema) or LDIF files containing Schema
definitions in OpenLDAP's LDIF Schema format.
To configure the databases managed by your LDAP server, proceed as follows:
Select the item in the left part of the dialog.
Click to add a new database.
Specify the requested data:
Enter the base DN (distinguished name) of your LDAP server.
Enter the DN of the administrator in charge of the server. If you
check , only provide the
cn of the administrator and the system fills in
the rest automatically.
Enter the password for the database administrator.
For convenience, check this option if wanted.
In the next dialog, configure replication settings.
In the next dialog, enable enforcement of password policies to provide extra security to your LDAP server:
Check to be able to specify a password policy.
Activate to have clear text passwords be hashed before they are written to the database whenever they are added or modified.
provides a relevant error message for bind requests to locked accounts.
Do not use the option if your environment is sensitive to security issues, because the “Locked Account” error message provides security-sensitive information that can be exploited by a potential attacker.
Enter the DN of the default policy object. To use a DN other than the one suggested by YaST, enter your choice. Otherwise, accept the default settings.
Complete the database configuration by clicking .
If you have not opted for password policies, your server is ready to run at this point. If you have chosen to enable password policies, proceed with the configuration of the password policy in detail. If you have chosen a password policy object that does not yet exist, YaST creates one:
Enter the LDAP server password. In the navigation tree below expand your database object and activate the item.
Make sure is activated. Then click .
Configure the password change policies:
Determine the number of passwords stored in the password history. Saved passwords may not be reused by the user.
Determine if users can change their passwords and if they will need to change their passwords after a reset by the administrator. Require the old password for password changes (optional).
Determine whether and to what extent passwords should be subject to quality checking. Set the minimum password length that must be met before a password is valid. If you select , users are allowed to use encrypted passwords even though the quality checks cannot be performed on them. If you opt for , only passwords that pass the quality tests are accepted as valid.
Configure the password time-limit policies:
Determine the minimum password time-limit (the time that needs to pass between two valid password changes) and the maximum password time limit.
Determine the time between a password expiration warning and the actual password expiration.
Set the number of times an expired password can still be used before it expires permanently.
Configure the lockout policies:
Enable password locking.
Determine the number of bind failures that trigger a password lock.
Determine the duration of the password lock.
Determine the length of time that password failures are kept in the cache before they are purged.
Apply your password policy settings with .
To edit a previously created database, select its base DN in the tree to the left. In the right part of the window, YaST displays a dialog similar to the one used for the creation of a new database (with the main difference that the base DN entry is grayed out and cannot be changed).
After leaving the Authentication Server configuration by selecting , you are ready to go with a basic working configuration for your Authentication Server. To fine-tune this setup, use OpenLDAP's dynamic configuration back-end.
OpenLDAP's dynamic configuration back-end stores the configuration
in an LDAP database. That database consists of a set of
.ldif files in
/etc/openldap/slapd.d. There is no need to access
these files directly. To access the settings you can either use the
YaST Authentication Server module (the
yast2-auth-server package) or an LDAP client
such as ldapmodify or ldapsearch.
For more information on the dynamic configuration of OpenLDAP, see the
“OpenLDAP Administration Guide”.
For editing LDAP users and groups with YaST, see Section 5.4, “Configuring LDAP Users and Groups in YaST”.
YaST allows setting up authentication on clients using different modules:
. Use both an identity service (usually LDAP) and a user authentication service (usually Kerberos). This option is based on SSSD and in the majority of cases is best suited for joining Active Directory domains.
This module is described in Section 7.3.2, “ Joining Active Directory Using ”.
.
Join an Active Directory (which entails use of Kerberos and LDAP). This option is
based on winbind and is best suited for joining an
Active Directory domain if support for NTLM or cross-forest trusts is necessary.
This module is described in Section 7.3.3, “ Joining Active Directory Using ”.
. Allows setting up LDAP identities and Kerberos authentication independently from each other and provides fewer options. While this module also uses SSSD, it is not as well suited for connecting to Active Directory as the previous two options.
This module is described in:
Two of the YaST modules are based on SSSD: and .
SSSD stands for System Security Services Daemon. SSSD talks to remote directory services that provide user data and provides various authentication methods, such as LDAP, Kerberos, or Active Directory (AD). It also provides an NSS (Name Service Switch) and PAM (Pluggable Authentication Module) interface.
SSSD can locally cache user data and then allow users to use the data, even if the real directory service is (temporarily) unreachable.
After running one of the YaST authentication modules, you can check whether SSSD is running with:
root # systemctl status sssd
sssd.service - System Security Services Daemon
   Loaded: loaded (/usr/lib/systemd/system/sssd.service; enabled)
   Active: active (running) since Thu 2015-10-23 11:03:43 CEST; 5s ago
[...]
To allow logging in when the authentication back-end is unavailable, SSSD will use its cache even if it was invalidated. This happens until the back-end is available again.
To invalidate the cache, run sss_cache -E (the
command sss_cache is part of the package
sssd-tools).
To completely remove the SSSD cache, run:
root # systemctl stop sssd
root # rm -f /var/lib/sss/db/*
root # systemctl start sssd
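The same cache reset can be wrapped in a small shell function. This is only a sketch; setting RUN=echo keeps it a dry run that prints the commands instead of executing them:

```shell
# Dry-run sketch of the SSSD cache reset sequence shown above.
RUN=echo   # replace with RUN="" (and run as root) to execute for real
reset_sssd_cache() {
    $RUN systemctl stop sssd
    $RUN rm -f /var/lib/sss/db/*
    $RUN systemctl start sssd
}
reset_sssd_cache
```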
For more information, see the SSSD man
pages sssd.conf (man
sssd.conf) and sssd (man
sssd). There are also man pages for most SSSD modules.
The Lightweight Directory Access Protocol (LDAP) is a set of protocols designed to access and maintain information directories. LDAP can be used for user and group management, system configuration management, address management, and more. This chapter provides a basic understanding of how OpenLDAP works.
In a network environment, it is crucial to keep important information structured and to serve it quickly. A directory service keeps information available in a well-structured and searchable form.
Ideally, a central server stores the data in a directory and distributes it to all clients using a well-defined protocol. The structured data allow a wide range of applications to access them. A central repository reduces the necessary administrative effort. The use of an open and standardized protocol like LDAP ensures that as many client applications as possible can access such information.
A directory in this context is a type of database optimized for quick and effective reading and searching:
To make multiple concurrent read accesses possible, the number of updates is usually kept very low. Write access is often limited to a few users with administrative privileges. In contrast, conventional databases are optimized for accepting the largest possible data volume in a short time.
When static data is administered, updates of the existing data sets are very rare. When working with dynamic data, especially when data sets like bank accounts or accounting are concerned, the consistency of the data is of primary importance. If an amount should be subtracted from one place to be added to another, both operations must happen concurrently, within one transaction, to ensure balance over the data stock. Traditional relational databases usually have a very strong focus on data consistency, for example through referential integrity and transaction support. Conversely, short-term inconsistencies are usually acceptable in LDAP directories, which do not have the same strong consistency requirements as relational databases.
The design of a directory service like LDAP is not laid out to support complex update or query mechanisms. Instead, all applications accessing the service should be able to do so quickly and easily.
Unix system administrators traditionally use NIS (Network Information
Service) for name resolution and data distribution in a network. The
configuration data contained in the files group,
hosts, mail,
netgroup, networks,
passwd, printcap,
protocols, rpc, and
services in the /etc directory
is distributed to clients all over the network. These files can be
maintained without major effort because they are simple text files. The
handling of larger amounts of data, however, becomes increasingly
difficult because of nonexistent structuring.
NIS is only designed for Unix platforms, and is not suitable as a
centralized data administration tool in heterogeneous networks.
Unlike NIS, the LDAP service is not restricted to pure Unix networks. Windows™ servers (starting with Windows 2000) support LDAP as a directory service, so the application tasks mentioned above are also supported on non-Unix systems.
The LDAP principle can be applied to any data structure that needs to be centrally administered. A few application examples are:
Replacement for the NIS service
Mail routing (postfix)
Address books for mail clients, like Mozilla Thunderbird, Evolution, and Outlook
Administration of zone descriptions for a BIND 9 name server
User authentication with Samba in heterogeneous networks
This list can be extended because LDAP is extensible, unlike NIS. The clearly-defined hierarchical structure of the data simplifies the administration of large amounts of data, as it can be searched more easily.
To get background knowledge on how an LDAP server works and how the data is stored, it is vital to understand the way the data is organized on the server and how this structure enables LDAP to provide fast access to the data. To successfully operate an LDAP setup, you also need to be familiar with some basic LDAP terminology. This section introduces the basic layout of an LDAP directory tree and provides the basic terminology used with regard to LDAP. Skip this introductory section if you already have some LDAP background knowledge and only want to learn how to set up an LDAP environment in openSUSE Leap. Read on at Section 5.5, “Manually Configuring an LDAP Server”.
An LDAP directory has a tree structure. All entries (called objects) of the directory have a defined position within this hierarchy. This hierarchy is called the directory information tree (DIT). The complete path to the desired entry, which unambiguously identifies it, is called the distinguished name or DN. A single node along the path to this entry is called relative distinguished name or RDN.
The relations within an LDAP directory tree become more evident in the following example, shown in Figure 5.1, “Structure of an LDAP Directory”.
The complete diagram is a fictional directory information tree. The
entries on three levels are depicted. Each entry corresponds to one box
in the image. The complete, valid distinguished name
for the fictional employee Geeko
Linux, in this case, is cn=Geeko
Linux,ou=doc,dc=example,dc=com. It is composed by adding the
RDN cn=Geeko Linux to the DN of the preceding entry
ou=doc,dc=example,dc=com.
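The DN composition described above is plain string concatenation, which can be sketched in shell (names taken from the example tree):

```shell
# A DN is formed by prepending the entry's RDN to the DN of its parent.
parent_dn="ou=doc,dc=example,dc=com"
rdn="cn=Geeko Linux"
dn="$rdn,$parent_dn"
echo "$dn"   # -> cn=Geeko Linux,ou=doc,dc=example,dc=com
```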
The types of objects that can be stored in the DIT are globally determined following a Schema. The type of an object is determined by the object class. The object class determines what attributes the relevant object must or can be assigned. The Schema, therefore, must contain definitions of all object classes and attributes used in the desired application scenario. The LDAP RFCs define several commonly used Schemas (see, for example, RFC 2252, RFC 2256, and RFC 4519). Additionally, Schemas are available for many other use cases (for example, Samba or NIS replacement). It is, however, possible to create custom Schemas or to use multiple Schemas complementing each other (if this is required by the environment in which the LDAP server should operate).
Table 5.1, “Commonly Used Object Classes and Attributes” offers a small overview of the object
classes from core.schema and
inetorgperson.schema used in the example, including
required attributes (Req. Attr.) and valid attribute values.
| Object Class | Meaning | Example Entry | Req. Attr. |
|---|---|---|---|
| domainComponent | name components of the domain | example | dc |
| organizationalUnit | organizational unit | doc | ou |
| inetOrgPerson | person-related data for the intranet or Internet | Geeko Linux | sn and cn |
Example 5.1, “Excerpt from schema.core” shows an excerpt from a Schema directive with explanations.
attributetype (2.5.4.11
   NAME ( 'ou' 'organizationalUnitName')   1
   DESC 'RFC2256: organizational unit this object belongs to'   2
   SUP name )   3

objectclass ( 2.5.6.5 NAME 'organizationalUnit'   4
   DESC 'RFC2256: an organizational unit'   5
   SUP top STRUCTURAL   6
   MUST ou   7
   MAY (userPassword $ searchGuide $ seeAlso $ businessCategory   8
      $ x121Address $ registeredAddress $ destinationIndicator
      $ preferredDeliveryMethod $ telexNumber $ teletexTerminalIdentifier
      $ telephoneNumber $ internationaliSDNNumber
      $ facsimileTelephoneNumber $ street $ postOfficeBox $ postalCode
      $ postalAddress $ physicalDeliveryOfficeName $ st $ l
      $ description) )
...
The attribute type organizationalUnitName and the
corresponding object class organizationalUnit serve as
an example here.
1. The name of the attribute, its unique OID (object identifier, numerical), and the abbreviation of the attribute.
2. A brief description of the attribute with DESC. The corresponding RFC, on which the definition is based, is also mentioned here.
3. SUP indicates a superordinate attribute type to which this attribute belongs.
4. The definition of the object class organizationalUnit begins, like the definition of the attribute, with an OID and the name of the object class.
5. A brief description of the object class.
6. The SUP top entry indicates that this object class is not subordinate to another object class.
7. With MUST, list all attribute types that must be used with an object of the type organizationalUnit.
8. With MAY, list all attribute types that are optionally permitted with this object class.
A very good introduction to the use of Schemas can be found in the
OpenLDAP documentation (openldap2-doc). When
installed, find it in
/usr/share/doc/packages/openldap2/adminguide/guide.html.
YaST includes the module that helps define authentication scenarios involving either LDAP or Kerberos.
It can also be used to join Kerberos and LDAP separately. However, in many such cases, using this module may not be the first choice, such as for joining Active Directory (which uses a combination of LDAP and Kerberos). For more information, see Section 4.2, “Configuring an Authentication Client with YaST”.
Start the module by selecting › .
To configure an LDAP client, follow the procedure below:
In the window , click .
Make sure that the tab is chosen.
Specify one or more LDAP server URLs, host names, or IP addresses under . When specifying multiple addresses, separate them with spaces.
Specify the appropriate LDAP distinguished name (DN) under
. For example, a valid entry could
be dc=example,dc=com.
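The base DN usually mirrors the DNS domain name. As a sketch of this common convention (not a rule; your directory may use a different suffix):

```shell
# Derive a dc=...-style base DN from a DNS domain name.
domain="example.com"
base_dn="dc=$(printf '%s' "$domain" | sed 's/\./,dc=/g')"
echo "$base_dn"   # -> dc=example,dc=com
```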
If your LDAP server supports TLS encryption, choose the appropriate security option under .
To first ask the server whether it supports TLS encryption and be able to downgrade to an unencrypted connection if it does not, use .
Activate other options as necessary:
You can and on the local computer for them.
Use to cache LDAP entries locally. However, this bears the danger that entries can be slightly out of date.
Specify the types of data that should be used from the LDAP source, such as and , , and (network-shared drives that can be automatically mounted on request).
Specify the distinguished name (DN) and password of the user under whose name you want to bind to the LDAP directory in and .
Otherwise, if the server supports it, you can also leave both text boxes empty to bind anonymously to the server.
When using authentication without enabling transport encryption using TLS or StartTLS, the password will be transmitted in the clear.
Under , you can additionally configure timeouts for BIND operations.
To check whether the LDAP connection works, click .
To leave the dialog, click . Then wait for the setup to complete.
Finally, click .
The actual registration of user and group data differs only slightly from the procedure when not using LDAP. The following instructions relate to the administration of users. The procedure for administering groups is analogous.
Access the YaST user administration with › .
Use to limit the view of users to the LDAP users and enter the password for the root DN.
Click to enter the user configuration. A dialog with four tabs opens:
Specify the user's name, login name, and password in the tab.
Check the tab for the group membership, login shell, and home directory of the new user. If necessary, change the default to values that better suit your needs.
Modify or accept the default .
Enter the tab, select the LDAP plug-in, and click to configure additional LDAP attributes assigned to the new user.
Click to apply your settings and leave the user configuration.
The initial input form of user administration offers . This allows you to apply LDAP search filters to the set of available users. Alternatively, open the module for configuring LDAP users and groups by selecting .
YaST uses OpenLDAP's dynamic configuration database
(back-config) to store the LDAP server's
configuration. For details about the dynamic configuration back-end, see
the slapd-config(5) man page or the OpenLDAP
Software 2.4 Administrator's Guide located at
/usr/share/doc/packages/openldap2/guide/admin/guide.html
on your system if the openldap2 package is
installed.
YaST does not use /etc/openldap/slapd.conf to
store the OpenLDAP configuration anymore. In case of a system upgrade, a
copy of the original /etc/openldap/slapd.conf file
will get created as
/etc/openldap/slapd.conf.YaSTsave.
To conveniently access the configuration back-end, use SASL external
authentication. For example, the following ldapsearch
command executed as root can show the complete
slapd configuration:
tux > ldapsearch -Y external -H ldapi:/// -b cn=config

Basic LDAP Server initialization and configuration can be done within the Authentication Server YaST module. For more information, see Section 4.1, “Configuring an Authentication Server with YaST”.
When the LDAP server is fully configured and all desired entries have
been made according to the pattern described in
Section 5.6, “Manually Administering LDAP Data”, start the LDAP server as
root by entering sudo systemctl start
slapd. To stop the server manually, enter the command
sudo systemctl stop slapd. Query the status of
the running LDAP server with sudo systemctl status
slapd.
Use the YaST , described in
Section 10.4, “Managing Services with YaST”, to have the server started and
stopped automatically on system bootup and shutdown. You can also
create the corresponding links to the start and stop scripts with the
systemctl commands as described
in Section 10.2.1, “Managing Services in a Running System”.
OpenLDAP offers a series of tools for the administration of data in the LDAP directory. The four most important tools for adding to, deleting from, searching through and modifying the data stock are explained in this section.
Once your LDAP server
is correctly configured (it features appropriate entries for
suffix, directory,
rootdn, rootpw and
index), proceed to entering records. OpenLDAP offers
the ldapadd command for this task. If possible, add
the objects to the database in bundles (for practical reasons). LDAP
can process the LDIF format (LDAP data interchange format) for this.
An LDIF file is a simple text file that can contain an arbitrary number
of attribute and value pairs.
The LDIF file for creating a rough framework for the example in
Figure 5.1, “Structure of an LDAP Directory” would look like the one in
Example 5.2, “An LDIF File”.
LDAP works with UTF-8 (Unicode). Umlauts must be encoded correctly.
Otherwise, avoid umlauts and other special characters or use
iconv to convert the input to UTF-8.
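For example, a Latin-1 encoded fragment can be converted with iconv before it is handed to ldapadd (file names here are illustrative):

```shell
# 0xFC is 'ü' in ISO-8859-1; convert the fragment to UTF-8 for LDAP.
printf 'sn: M\374ller\n' > /tmp/entry.latin1
iconv -f ISO-8859-1 -t UTF-8 /tmp/entry.latin1 > /tmp/entry.ldif
cat /tmp/entry.ldif   # -> sn: Müller
```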
# The Organization
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example
dc: example

# The organizational unit development (devel)
dn: ou=devel,dc=example,dc=com
objectClass: organizationalUnit
ou: devel

# The organizational unit documentation (doc)
dn: ou=doc,dc=example,dc=com
objectClass: organizationalUnit
ou: doc

# The organizational unit internal IT (it)
dn: ou=it,dc=example,dc=com
objectClass: organizationalUnit
ou: it
Save the file with the .ldif suffix then pass it to
the server with the following command:
tux > ldapadd -x -D DN_OF_THE_ADMINISTRATOR -W -f FILE.ldif
-x switches off the authentication with SASL in this
case. -D declares the user that calls the operation.
The valid DN of the administrator is entered here, as it has been
configured in slapd.conf. In the current example,
this is cn=Administrator,dc=example,dc=com.
-W circumvents entering the password on the command
line (in clear text) and activates a separate password prompt.
The -f option passes the file name. See the details
of running ldapadd in
Example 5.3, “ldapadd with example.ldif”.
tux > ldapadd -x -D cn=Administrator,dc=example,dc=com -W -f example.ldif
Enter LDAP password:
adding new entry "dc=example,dc=com"
adding new entry "ou=devel,dc=example,dc=com"
adding new entry "ou=doc,dc=example,dc=com"
adding new entry "ou=it,dc=example,dc=com"
The user data of individuals can be prepared in separate LDIF files.
Example 5.4, “LDIF Data for Tux” adds
Tux to the new LDAP directory.
# coworker Tux
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
objectClass: inetOrgPerson
cn: Tux Linux
givenName: Tux
sn: Linux
mail: tux@example.com
uid: tux
telephoneNumber: +49 1234 567-8
An LDIF file can contain an arbitrary number of objects. It is possible to pass directory branches (entirely or in part) to the server in one go, as shown in the example of individual objects. If it is necessary to modify some data relatively often, a fine subdivision of single objects is recommended.
The tool ldapmodify is provided for modifying the
data stock. The easiest way to do this is to modify the corresponding
LDIF file and pass the modified file to the LDAP server. To change the
telephone number of colleague Tux from +49 1234 567-8
to +49 1234 567-10, edit the LDIF file like in
Example 5.5, “Modified LDIF File tux.ldif”.
# coworker Tux
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: +49 1234 567-10
Import the modified file into the LDAP directory with the following command:
tux > ldapmodify -x -D cn=Administrator,dc=example,dc=com -W -f tux.ldif
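Such modify LDIF files can also be generated from a script instead of being edited by hand; a minimal sketch using the example DN from this chapter:

```shell
# Generate the change set for Tux's new telephone number.
new_number="+49 1234 567-10"
cat > /tmp/tux.ldif <<EOF
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: $new_number
EOF
grep -c 'telephoneNumber' /tmp/tux.ldif   # attribute name occurs twice
```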
Alternatively, pass the attributes to change directly to
ldapmodify as follows:
Start ldapmodify and enter your password:
tux > ldapmodify -x -D cn=Administrator,dc=example,dc=com -W
Enter LDAP password:

Enter the changes while carefully complying with the syntax in the order presented below:
dn: cn=Tux Linux,ou=devel,dc=example,dc=com
changetype: modify
replace: telephoneNumber
telephoneNumber: +49 1234 567-10
For more information about ldapmodify and its syntax,
see the ldapmodify man page.
OpenLDAP provides, with ldapsearch, a command line
tool for searching data within an LDAP directory and reading data from
it. This is a simple query:
tux > ldapsearch -x -b dc=example,dc=com "(objectClass=*)"
The -b option determines the search base (the section
of the tree within which the search should be performed). In the current
case, this is dc=example,dc=com. To perform a more
finely-grained search in specific subsections of the LDAP directory (for
example, only within the devel department), pass this
section to ldapsearch with -b.
-x requests activation of simple authentication.
(objectClass=*) declares that all objects contained
in the directory should be read. This command option can be used after
the creation of a new directory tree to verify that all entries have
been recorded correctly and the server responds as desired. For more
information about the use of ldapsearch, see the
ldapsearch(1) man page.
Delete unwanted entries with ldapdelete. The syntax
is similar to that of the other commands. To delete, for example, the
complete entry for Tux Linux, issue the following
command:
tux > ldapdelete -x -D cn=Administrator,dc=example,dc=com -W \
"cn=Tux Linux,ou=devel,dc=example,dc=com"

More complex subjects (like SASL configuration or establishment of a replicating LDAP server that distributes the workload among multiple slaves) were omitted from this chapter. Find detailed information about both subjects in the OpenLDAP 2.4 Administrator's Guide.
The Web site of the OpenLDAP project offers exhaustive documentation for beginner and advanced LDAP users:
A detailed question and answer collection applying to the installation, configuration, and use of OpenLDAP. Find it at http://www.openldap.org/faq/data/cache/1.html.
Brief step-by-step instructions for installing your first LDAP server.
Find it at
http://www.openldap.org/doc/admin24/quickstart.html
or on an installed system in Section 2 of
/usr/share/doc/packages/openldap2/guide/admin/guide.html.
A detailed introduction to all important aspects of LDAP
configuration, including access controls and encryption. See
http://www.openldap.org/doc/admin24/ or, on an
installed system,
/usr/share/doc/packages/openldap2/guide/admin/guide.html.
A detailed general introduction to the basic principles of LDAP: http://www.redbooks.ibm.com/redbooks/pdfs/sg244986.pdf.
Printed literature about LDAP:
LDAP System Administration by Gerald Carter (ISBN 1-56592-491-6)
Understanding and Deploying LDAP Directory Services by Howes, Smith, and Good (ISBN 0-672-32316-8)
The ultimate reference material for the subject of LDAP is the set of corresponding RFCs (request for comments), 2251 to 2256.
An open network provides no means of ensuring that a workstation can identify its users properly, except through the usual password mechanisms. In common installations, the user must enter the password each time a service inside the network is accessed. Kerberos provides an authentication method with which a user registers only once and is trusted in the complete network for the rest of the session. To have a secure network, the following requirements must be met:
Have all users prove their identity for each desired service and make sure that no one can take the identity of someone else.
Make sure that each network server also proves its identity. Otherwise an attacker might be able to impersonate the server and obtain sensitive information transmitted to the server. This concept is called mutual authentication, because the client authenticates to the server and vice versa.
Kerberos helps you meet these requirements by providing strongly encrypted authentication. Only the basic principles of Kerberos are discussed here. For detailed technical instruction, refer to the Kerberos documentation.
The following glossary defines some Kerberos terminology.
Users or clients need to present some kind of credentials that authorize them to request services. Kerberos knows two kinds of credentials—tickets and authenticators.
A ticket is a per-server credential used by a client to authenticate at a server from which it is requesting a service. It contains the name of the server, the client's name, the client's Internet address, a time stamp, a lifetime, and a random session key. All this data is encrypted using the server's key.
Combined with the ticket, an authenticator is used to prove that the client presenting a ticket is really the one it claims to be. An authenticator is built using the client's name, the workstation's IP address, and the current workstation's time, all encrypted with the session key known only to the client and the relevant server. An authenticator can only be used once, unlike a ticket. A client can build an authenticator itself.
A Kerberos principal is a unique entity (a user or service) to which Kerberos can assign tickets. A principal consists of the following components:
USER/INSTANCE@REALM
primary: The first part of the principal. In the case of users, this is usually the same as the user name.
instance (optional): Additional information characterizing the primary. This string is separated from the primary by a /. tux@example.org and tux/admin@example.org can both exist on the same Kerberos system and are treated as different principals.
realm: Specifies the Kerberos realm. Normally, your realm is your domain name in uppercase letters.
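The principal components described above can be illustrated with a short parsing sketch (illustrative only; the function name is made up and is not part of Kerberos):

```python
def parse_principal(principal):
    """Split a Kerberos principal into its primary, instance, and realm
    components. The realm follows the "@"; the optional instance is
    separated from the primary by a "/"."""
    name, _, realm = principal.partition("@")
    primary, _, instance = name.partition("/")
    return primary, instance or None, realm

# tux@example.org and tux/admin@example.org are different principals:
print(parse_principal("tux@example.org"))        # ('tux', None, 'example.org')
print(parse_principal("tux/admin@example.org"))  # ('tux', 'admin', 'example.org')
```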
Kerberos ensures that both client and server can be sure of each other's identity. They share a session key, which they can use to communicate securely.
Session keys are temporary private keys generated by Kerberos. They are known to the client and used to encrypt the communication between the client and the server for which it requested and received a ticket.
Almost all messages sent in a network can be eavesdropped, stolen, and resent. In the Kerberos context, this would be most dangerous if an attacker manages to obtain your request for a service containing your ticket and authenticator. The attacker could then try to resend it (replay) to impersonate you. However, Kerberos implements several mechanisms to deal with this problem.
Service is used to refer to a specific action to perform. The process behind this action is called a server.
Kerberos is often called a third-party trusted authentication service, which means all its clients trust Kerberos's judgment of another client's identity. Kerberos keeps a database of all its users and their private keys.
To ensure Kerberos is working correctly, run both the authentication and
ticket-granting server on a dedicated machine. Make sure that only the
administrator can access this machine physically and over the network.
Reduce the (networking) services running on it to the absolute
minimum—do not even run
sshd.
Your first contact with Kerberos is quite similar to any login procedure at a normal networking system. Enter your user name. This piece of information and the name of the ticket-granting service are sent to the authentication server (Kerberos). If the authentication server knows you, it generates a random session key for further use between your client and the ticket-granting server. Now the authentication server prepares a ticket for the ticket-granting server. The ticket contains the following information—all encrypted with a session key only the authentication server and the ticket-granting server know:
The names of both the client and the ticket-granting server
The current time
A lifetime assigned to this ticket
The client's IP address
The newly-generated session key
This ticket is then sent back to the client together with the session key, again in encrypted form, but this time the private key of the client is used. This private key is only known to Kerberos and the client, because it is derived from your user password. Now that the client has received this response, you are prompted for your password. This password is converted into the key that can decrypt the package sent by the authentication server. The package is “unwrapped” and password and key are erased from the workstation's memory. As long as the lifetime given to the ticket used to obtain other tickets does not expire, your workstation can prove your identity.
To request a service from any server in the network, the client application needs to prove its identity to the server. Therefore, the application generates an authenticator. An authenticator consists of the following components:
The client's principal
The client's IP address
The current time
A checksum (chosen by the client)
All this information is encrypted using the session key that the client has already received for this special server. The authenticator and the ticket for the server are sent to the server. The server uses its copy of the session key to decrypt the authenticator, which gives it all the information needed about the client requesting its service, to compare it to that contained in the ticket. The server checks if the ticket and the authenticator originate from the same client.
Without any security measures implemented on the server side, this stage of the process would be an ideal target for replay attacks. Someone could try to resend a request stolen off the net some time before. To prevent this, the server does not accept any request with a time stamp and ticket received previously. In addition to that, a request with a time stamp differing too much from the time the request is received is ignored.
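The two defenses just described (rejecting requests seen before and requests whose time stamps deviate too much) can be sketched as follows; this is a simplified illustration, not the actual MIT Kerberos replay cache:

```python
CLOCK_SKEW = 300  # maximum accepted time stamp deviation, in seconds

seen_requests = set()  # simplified replay cache of accepted requests

def accept_request(client, timestamp, server_now):
    """Accept a request only if its time stamp is close to the server's
    clock and the same request has not been accepted before."""
    if abs(server_now - timestamp) > CLOCK_SKEW:
        return False  # time stamp differs too much: ignore the request
    if (client, timestamp) in seen_requests:
        return False  # already accepted once: a replayed request
    seen_requests.add((client, timestamp))
    return True

print(accept_request("tux", 1000, 1010))  # True  (fresh request)
print(accept_request("tux", 1000, 1020))  # False (replay of the same request)
print(accept_request("tux", 1000, 2000))  # False (time stamp too old)
```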
Kerberos authentication can be used in both directions. It is not only a question of the client being the one it claims to be. The server should also be able to authenticate itself to the client requesting its service. Therefore, it sends an authenticator itself. It adds one to the checksum it received in the client's authenticator and encrypts it with the session key, which is shared between it and the client. The client takes this response as a proof of the server's authenticity and they both start cooperating.
Tickets are designed to be used for one server at a time. Therefore, you need to get a new ticket each time you request another service. Kerberos implements a mechanism to obtain tickets for individual servers. This service is called the “ticket-granting service”. The ticket-granting service is a service (like any other service mentioned before) and uses the same access protocols that have already been outlined. Any time an application needs a ticket that has not already been requested, it contacts the ticket-granting server. This request consists of the following components:
The requested principal
The ticket-granting ticket
An authenticator
Like any other server, the ticket-granting server now checks the ticket-granting ticket and the authenticator. If they are considered valid, the ticket-granting server builds a new session key to be used between the original client and the new server. Then the ticket for the new server is built, containing the following information:
The client's principal
The server's principal
The current time
The client's IP address
The newly-generated session key
The new ticket has a lifetime, which is either the remaining lifetime of the ticket-granting ticket or the default for the service. The lesser of both values is assigned. The client receives this ticket and the session key, which are sent by the ticket-granting service. But this time the answer is encrypted with the session key that came with the original ticket-granting ticket. The client can decrypt the response without requiring the user's password when a new service is contacted. Kerberos can thus acquire ticket after ticket for the client without bothering the user.
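The lifetime rule amounts to taking the lesser of the two values:

```python
def new_ticket_lifetime(tgt_remaining_s, service_default_s):
    """The new ticket gets the remaining lifetime of the ticket-granting
    ticket or the default for the service, whichever is lesser."""
    return min(tgt_remaining_s, service_default_s)

# The TGT has 2 hours left and the service default is 8 hours:
print(new_ticket_lifetime(2 * 3600, 8 * 3600))  # 7200 (2 hours)
```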
Ideally, a user's only contact with Kerberos happens during login at the workstation. The login process includes obtaining a ticket-granting ticket. At logout, a user's Kerberos tickets are automatically destroyed, which makes it difficult for anyone else to impersonate this user.
The automatic expiration of tickets can lead to a situation when a user's login session lasts longer than the maximum
lifespan given to the ticket-granting ticket (a reasonable setting is 10
hours). However, the user can get a new ticket-granting ticket by running
kinit and entering the password again. Kerberos then obtains
access to desired services without additional authentication. To get a
list of all the tickets silently acquired for you by Kerberos, run
klist.
Here is a short list of applications that use Kerberos authentication.
These applications can be found under
/usr/lib/mit/bin or
/usr/lib/mit/sbin after installing the package
krb5-apps-clients. They all have the full
functionality of their common Unix and Linux counterparts, plus the
bonus of transparent authentication managed by Kerberos:
telnet,
telnetd
rlogin
rsh, rcp,
rshd
ftp, ftpd
ksu
You no longer need to enter your password for using these applications
because Kerberos has already proven your identity.
ssh, if compiled with Kerberos support, can even
forward all the tickets acquired for one workstation to another one. If
you use ssh to log in to another workstation,
ssh makes sure that the encrypted contents of the
tickets are adjusted to the new situation. Simply copying tickets between
workstations is not sufficient because the ticket contains
workstation-specific information (the IP address). XDM and GDM offer
Kerberos support, too. Read more about the Kerberos network applications
in Kerberos V5 UNIX User's Guide at
http://web.mit.edu/kerberos.
A Kerberos environment consists of several components. A key distribution center (KDC) holds the central database with all Kerberos-relevant data. All clients rely on the KDC for proper authentication across the network. Both the KDC and the clients need to be configured to match your setup:
Check your network setup and make sure it meets the minimum requirements outlined in Section 6.5.1, “Kerberos Network Topology”. Choose an appropriate realm for your Kerberos setup, see Section 6.5.2, “Choosing the Kerberos Realms”. Carefully set up the machine that is to serve as the KDC and apply tight security, see Section 6.5.3, “Setting Up the KDC Hardware”. Set up a reliable time source in your network to make sure all tickets contain valid time stamps, see Section 6.5.4, “Configuring Time Synchronization”.
Configure the KDC and the clients, see Section 6.5.5, “Configuring the KDC” and Section 6.5.6, “Configuring Kerberos Clients”. Enable remote administration for your Kerberos service, so you do not need physical access to your KDC machine, see Section 6.5.7, “Configuring Remote Kerberos Administration”. Create service principals for every service in your realm, see Section 6.5.8, “Creating Kerberos Service Principals”.
Various services in your network can use Kerberos. To add Kerberos password-checking to applications using PAM, proceed as outlined in Section 6.5.9, “Enabling PAM Support for Kerberos”. To configure SSH or LDAP with Kerberos authentication, proceed as outlined in Section 6.5.10, “Configuring SSH for Kerberos Authentication” and Section 6.5.11, “Using LDAP and Kerberos”.
Any Kerberos environment must meet the following requirements to be fully functional:
Provide a DNS server for name resolution across your network, so clients and servers can locate each other. Refer to Chapter 19, The Domain Name System for information on DNS setup.
Provide a time server in your network. Using exact time stamps is crucial to a Kerberos setup, because valid Kerberos tickets must contain correct time stamps. Refer to Chapter 18, Time Synchronization with NTP for information on NTP setup.
Provide a key distribution center (KDC) as the centerpiece of the Kerberos architecture. It holds the Kerberos database. Use the tightest possible security policy on this machine to prevent attacks on it from compromising your entire infrastructure.
Configure the client machines to use Kerberos authentication.
The following figure depicts a simple example network with only the minimum components needed to build a Kerberos infrastructure. Depending on the size and topology of your deployment, your setup may vary.
For a setup similar to the one in Figure 6.1, “Kerberos Network Topology”, configure routing between the two subnets (192.168.1.0/24 and 192.168.2.0/24). Refer to Section 13.4.1.5, “Configuring Routing” for more information on configuring routing with YaST.
The domain of a Kerberos installation is called a realm and is
identified by a name, such as EXAMPLE.COM or simply
ACCOUNTING. Kerberos is case-sensitive, so
example.com is actually a different realm than
EXAMPLE.COM. Use the case you prefer. It is common
practice, however, to use uppercase realm names.
It is also a good idea to use your DNS domain name (or a subdomain, such
as ACCOUNTING.EXAMPLE.COM). As shown below, your life
as an administrator can be much easier if you configure your Kerberos
clients to locate the KDC and other Kerberos services via DNS. To do so,
it is helpful if your realm name is a subdomain of your DNS domain name.
Unlike the DNS name space, Kerberos is not hierarchical. If you have
a realm named EXAMPLE.COM with two “subrealms” named
DEVELOPMENT and
ACCOUNTING, these subordinate realms do not inherit principals from
EXAMPLE.COM. Instead, you would have three
separate realms, and you would need to configure
cross-realm authentication for each realm so that users from one realm can interact
with servers or other users from another realm.
For the sake of simplicity, let us assume you are setting up only one
realm for your entire organization. For the remainder of this section,
the realm name EXAMPLE.COM is used in all examples.
The first thing required to use Kerberos is a machine that acts as the key distribution center, or KDC for short. This machine holds the entire Kerberos user database with passwords and all information.
The KDC is the most important part of your security infrastructure—if someone breaks into it, all user accounts and all of your infrastructure protected by Kerberos are compromised. An attacker with access to the Kerberos database can impersonate any principal in the database. Tighten security for this machine as much as possible:
Put the server machine into a physically secured location, such as a locked server room to which only a very few people have access.
Do not run any network applications on it except the KDC. This includes servers and clients—for example, the KDC should not import any file systems via NFS or use DHCP to retrieve its network configuration.
Install a minimal system first, then check the list of installed
packages and remove any unneeded packages. This includes servers, such
as inetd,
portmap, and CUPS, plus
anything X-based. Even installing an SSH server should be considered a
potential security risk.
Do not provide a graphical login on this machine; an X server is a potential security risk. Kerberos provides its own administration interface.
Configure /etc/nsswitch.conf to use only local
files for user and group lookup. Change the lines for
passwd and group to look like
this:
passwd: files
group:  files
Edit the passwd, group, and
shadow files in /etc and
remove the lines that start with a + character
(these are for NIS lookups).
Disable all user accounts except root's account by editing
/etc/shadow and replacing the hashed passwords
with * or ! characters.
To use Kerberos successfully, make sure that all system clocks within your organization are synchronized within a certain range. This is important because Kerberos protects against replayed credentials. An attacker might be able to observe Kerberos credentials on the network and reuse them to attack the server. Kerberos employs several defenses to prevent this. One of them is that it puts time stamps into its tickets. A server receiving a ticket with a time stamp that differs from the current time rejects the ticket.
Kerberos allows a certain leeway when comparing time stamps. However, computer clocks can be very inaccurate in keeping time—it is not unheard of for PC clocks to lose or gain half an hour during a week. For this reason, configure all hosts on the network to synchronize their clocks with a central time source.
A simple way to do so is by installing an NTP time server on one machine and
having all clients synchronize their clocks with this server. Do this by
running the NTP daemon chronyd as a client on all these machines. The KDC
itself needs to be synchronized to the common time source as well. Because
running an NTP daemon on this machine would be a security risk, it is
probably a good idea to do this by running chronyd -q via
a cron job. To configure your machine as an NTP client, proceed as outlined
in Section 18.1, “Configuring an NTP Client with YaST”.
A different way to secure the time service and still use the NTP daemon is to attach a hardware reference clock to a dedicated NTP server and an additional hardware reference clock to the KDC.
It is also possible to adjust the maximum deviation Kerberos allows when
checking time stamps. This value (called clock
skew) can be set in the krb5.conf file
as described in
Section 6.5.6.3, “Adjusting the Clock Skew”.
This section covers the initial configuration and installation of the KDC, including the creation of an administrative principal. This procedure consists of several steps:
Install the RPMs.
On the machine designated as the KDC, install the following software
packages: krb5,
krb5-server, and
krb5-client.
Adjust the Configuration Files.
The /etc/krb5.conf and
/var/lib/kerberos/krb5kdc/kdc.conf configuration
files must be adjusted for your scenario. These files contain all
information on the KDC.
Create the Kerberos Database. Kerberos keeps a database of all principal identifiers and the secret keys of all principals that need to be authenticated. Refer to Section 6.5.5.1, “Setting Up the Database” for details.
Adjust the ACL Files: Add Administrators.
The Kerberos database on the KDC can be managed remotely. To prevent
unauthorized principals from tampering with the database, Kerberos
uses access control lists. You must explicitly enable remote access
for the administrator principal to enable them to manage the database.
The Kerberos ACL file is located under
/var/lib/kerberos/krb5kdc/kadm5.acl. Refer to
Section 6.5.7, “Configuring Remote Kerberos Administration” for details.
Adjust the Kerberos Database: Add Administrators. You need at least one administrative principal to run and administer Kerberos. This principal must be added before starting the KDC. Refer to Section 6.5.5.2, “Creating a Principal” for details.
Start the Kerberos Daemon. After the KDC software is installed and properly configured, start the Kerberos daemon to provide Kerberos service for your realm. Refer to Section 6.5.5.3, “Starting the KDC” for details.
Create a Principal for Yourself. You need a principal for yourself. Refer to Section 6.5.5.2, “Creating a Principal” for details.
Your next step is to initialize the database where Kerberos keeps all information about principals. Set up the database master key, which is used to protect the database from accidental disclosure (in particular if it is backed up to tape). The master key is derived from a pass phrase and is stored in a file called the stash file. This is so you do not need to enter the password every time the KDC is restarted. Make sure that you choose a good pass phrase, such as a sentence from a book opened to a random page.
When you make tape backups of the Kerberos database
(/var/lib/kerberos/krb5kdc/principal), do not back
up the stash file (which is in
/var/lib/kerberos/krb5kdc/.k5.EXAMPLE.COM).
Otherwise, everyone able to read the tape could also decrypt the
database. Therefore, keep a copy of the pass phrase in a safe or some
other secure location, because you will need it to restore your
database from backup tape after a crash.
To create the stash file and the database, run:
tux > sudo kdb5_util create -r EXAMPLE.COM -s
You will see the following output:
Initializing database '/var/lib/kerberos/krb5kdc/principal' for realm 'EXAMPLE.COM',
master key name 'K/M@EXAMPLE.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
To verify, use the list command:
tux > kadmin.local
kadmin> listprincs
You will see several principals in the database, which are for internal use by Kerberos:
K/M@EXAMPLE.COM
kadmin/admin@EXAMPLE.COM
kadmin/changepw@EXAMPLE.COM
krbtgt/EXAMPLE.COM@EXAMPLE.COM
Create two Kerberos principals for yourself: one normal principal for
everyday work and one for administrative tasks relating to Kerberos.
Assuming your login name is geeko, proceed as follows:
tux > kadmin.local
kadmin> ank geeko
You will see the following output:
geeko@EXAMPLE.COM's Password:
Verifying password:
Next, create another principal named
geeko/admin by typing
ank geeko/admin at
the kadmin prompt. The admin
suffixed to your user name is a role. Later, use
this role when administering the Kerberos database. A user can have
several roles for different purposes. Roles act like completely
different accounts that have similar names.
Start the KDC daemon and the kadmin daemon. To start the daemons manually, enter:
tux > sudo systemctl start krb5kdc
tux > sudo systemctl start kadmind
Also make sure that the services KDC (krb5kdc) and
kadmind (kadmind) are started by
default when the server machine is rebooted. Enable them by entering:
tux > sudo systemctl enable krb5kdc kadmind
or by using YaST.
When the supporting infrastructure is in place (DNS, NTP) and the KDC has been properly configured and started, configure the client machines. To configure a Kerberos client, use one of the two manual approaches described below.
When configuring Kerberos, there are two approaches you can
take—static configuration in the
/etc/krb5.conf file or dynamic configuration with
DNS. With DNS configuration, Kerberos applications try to locate the
KDC services using DNS records. With static configuration, add the host
names of your KDC server to krb5.conf (and update
the file whenever you move the KDC or reconfigure your realm in other
ways).
DNS-based configuration is generally a lot more flexible and the amount
of configuration work per machine is a lot less. However, it requires
that your realm name is either the same as your DNS domain or a
subdomain of it. Configuring Kerberos via DNS also creates a
security issue: an attacker can seriously disrupt your
infrastructure through your DNS (by shooting down the name server,
spoofing DNS records, etc.). However, this amounts to a denial of
service at worst. A similar scenario applies to the static
configuration case unless you enter IP addresses in
krb5.conf instead of host names.
One way to configure Kerberos is to edit
/etc/krb5.conf. The file installed by default
contains various sample entries. Erase all of these entries before
starting. krb5.conf is made up of several
sections (stanzas), each introduced by the section name in brackets
like [this].
To configure your Kerberos clients, add the following stanza to
krb5.conf (where
kdc.example.com is the
host name of the KDC):
[libdefaults]
default_realm = EXAMPLE.COM
[realms]
EXAMPLE.COM = {
kdc = kdc.example.com
admin_server = kdc.example.com
}
The default_realm line sets the default realm for
Kerberos applications. If you have several realms, add additional
statements to the [realms] section.
Also add a statement to this file that tells applications how to map
host names to a realm. For example, when connecting to a remote host,
the Kerberos library needs to know in which realm this host is
located. This must be configured in the
[domain_realm] section:
[domain_realm]
.example.com = EXAMPLE.COM
www.example.org = EXAMPLE.COM
This tells the library that all hosts in the
example.com DNS domain are in the
EXAMPLE.COM Kerberos realm. In addition, one
external host named www.example.org should also
be considered a member of the EXAMPLE.COM realm.
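The lookup behavior of this section can be sketched as follows (an illustrative suffix match; the Kerberos library's actual resolution logic is more involved):

```python
DOMAIN_REALM = {
    ".example.com": "EXAMPLE.COM",     # all hosts under example.com
    "www.example.org": "EXAMPLE.COM",  # a single external host
}

def realm_for_host(hostname):
    """Map a host name to its Kerberos realm: exact host entries win,
    then the longest matching domain suffix."""
    if hostname in DOMAIN_REALM:
        return DOMAIN_REALM[hostname]
    suffixes = [d for d in DOMAIN_REALM
                if d.startswith(".") and hostname.endswith(d)]
    if suffixes:
        return DOMAIN_REALM[max(suffixes, key=len)]
    return None

print(realm_for_host("ldap.example.com"))  # EXAMPLE.COM (suffix match)
print(realm_for_host("www.example.org"))   # EXAMPLE.COM (exact entry)
print(realm_for_host("ftp.example.net"))   # None (no mapping)
```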
DNS-based Kerberos configuration makes heavy use of SRV records. See RFC 2052, A DNS RR for specifying the location of services, at http://www.ietf.org.
The name of an SRV record, as far as Kerberos is concerned, is always
in the format _service._proto.realm, where realm is
the Kerberos realm. Domain names in DNS are case-insensitive, so
case-sensitive Kerberos realms would break when using this
configuration method. _service is a service name
(different names are used when trying to contact the KDC or the
password service, for example). _proto can be
either _udp or _tcp, but not all
services support both protocols.
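The record name format can be sketched as a small helper (illustrative; the function name is made up):

```python
def srv_record_name(service, proto, realm):
    """Build the SRV record name _service._proto.realm that Kerberos
    looks up via DNS (proto is "udp" or "tcp")."""
    return "_{0}._{1}.{2}".format(service, proto, realm)

print(srv_record_name("kerberos", "udp", "EXAMPLE.COM"))
# _kerberos._udp.EXAMPLE.COM
print(srv_record_name("kerberos-adm", "tcp", "EXAMPLE.COM"))
# _kerberos-adm._tcp.EXAMPLE.COM
```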
The data portion of SRV resource records consists of a priority value, a weight, a port number, and a host name. The priority defines the order in which hosts should be tried (lower values indicate a higher priority). The weight value is there to support some sort of load balancing among servers of equal priority. You probably do not need any of this, so it is okay to set these to zero.
MIT Kerberos currently looks up the following names when looking for services:
This defines the location of the KDC daemon (the authentication and ticket granting server). Typical records look like this:
_kerberos._udp.EXAMPLE.COM.  IN SRV 0 0 88 kdc.example.com.
_kerberos._tcp.EXAMPLE.COM.  IN SRV 0 0 88 kdc.example.com.
This describes the location of the remote administration service. Typical records look like this:
_kerberos-adm._tcp.EXAMPLE.COM. IN SRV 0 0 749 kdc.example.com.
Because kadmind does not support UDP, there should be no
_udp record.
As with the static configuration file, there is a mechanism to inform
clients that a specific host is in the EXAMPLE.COM
realm, even if it is not part of the example.com
DNS domain. This can be done by attaching a TXT record to
_kerberos.host_name, as shown here:
_kerberos.www.example.org. IN TXT "EXAMPLE.COM"
The clock skew is the tolerance for accepting tickets with time stamps that do not exactly match the host's system clock. Usually, the clock skew is set to 300 seconds (five minutes). This means a ticket can have a time stamp somewhere between five minutes behind and five minutes ahead of the server's clock.
When using NTP to synchronize all hosts, you can reduce this value to
about one minute. The clock skew value can be set in
/etc/krb5.conf like this:
[libdefaults]
clockskew = 60
To be able to add and remove principals from the Kerberos database
without accessing the KDC's console directly, tell the Kerberos
administration server which principals are allowed to do what by editing
/var/lib/kerberos/krb5kdc/kadm5.acl. The ACL
(access control list) file allows you to specify privileges with a
precise degree of control. For details, refer to the manual page with
man 8 kadmind.
For now, grant yourself the privilege to administer the database by putting the following line into the file:
geeko/admin *
Replace the user name geeko with your own. Restart
kadmind for the change to take effect.
You should now be able to perform Kerberos administration tasks remotely using the kadmin tool. First, obtain a ticket for your admin role and use that ticket when connecting to the kadmin server:
tux > kadmin -p geeko/admin
Authenticating as principal geeko/admin@EXAMPLE.COM with password.
Password for geeko/admin@EXAMPLE.COM:
kadmin: getprivs
current privileges: GET ADD MODIFY DELETE
kadmin:
Using the getprivs command, verify which privileges
you have. The list shown above is the full set of privileges.
As an example, modify the principal geeko:
tux > kadmin -p geeko/admin
Authenticating as principal geeko/admin@EXAMPLE.COM with password.
Password for geeko/admin@EXAMPLE.COM:
kadmin: getprinc geeko
Principal: geeko@EXAMPLE.COM
Expiration date: [never]
Last password change: Wed Jan 12 17:28:46 CET 2005
Password expiration date: [none]
Maximum ticket life: 0 days 10:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Wed Jan 12 17:47:17 CET 2005 (admin/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, Triple DES cbc mode with HMAC/sha1, no salt
Key: vno 1, DES cbc mode with CRC-32, no salt
Attributes:
Policy: [none]
kadmin: modify_principal -maxlife "8 hours" geeko
Principal "geeko@EXAMPLE.COM" modified.
kadmin: getprinc geeko
Principal: geeko@EXAMPLE.COM
Expiration date: [never]
Last password change: Wed Jan 12 17:28:46 CET 2005
Password expiration date: [none]
Maximum ticket life: 0 days 08:00:00
Maximum renewable life: 7 days 00:00:00
Last modified: Wed Jan 12 17:59:49 CET 2005 (geeko/admin@EXAMPLE.COM)
Last successful authentication: [never]
Last failed authentication: [never]
Failed password attempts: 0
Number of keys: 2
Key: vno 1, Triple DES cbc mode with HMAC/sha1, no salt
Key: vno 1, DES cbc mode with CRC-32, no salt
Attributes:
Policy: [none]
kadmin:
This changes the maximum ticket lifetime to eight hours. For more
information about the kadmin command and the options
available, see the krb5-doc package or refer to
the man 8 kadmin manual page.
So far, only user credentials have been discussed. However,
Kerberos-compatible services usually need to authenticate themselves to
the client user, too. Therefore, special service principals must be
in the Kerberos database for each service offered in the realm.
For example, if ldap.example.com offers an LDAP service, you need a service
principal, ldap/ldap.example.com@EXAMPLE.COM, to
authenticate this service to all clients.
The naming convention for service principals is
SERVICE/HOSTNAME@REALM,
where HOSTNAME is the host's fully qualified
host name.
Valid service descriptors are:
| Service Descriptor | Service |
|---|---|
| host | Telnet, RSH, SSH |
| nfs | NFSv4 (with Kerberos support) |
| HTTP | HTTP (with Kerberos authentication) |
| imap | IMAP |
| pop | POP3 |
| ldap | LDAP |
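The naming convention above can be sketched as:

```python
def service_principal(service, hostname, realm):
    """Build a service principal name SERVICE/HOSTNAME@REALM, where
    hostname is the fully qualified host name of the server."""
    return "{0}/{1}@{2}".format(service, hostname, realm)

# The LDAP service on ldap.example.com in realm EXAMPLE.COM:
print(service_principal("ldap", "ldap.example.com", "EXAMPLE.COM"))
# ldap/ldap.example.com@EXAMPLE.COM
```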
Service principals are similar to user principals, but have significant differences. The main difference between a user principal and a service principal is that the key of the former is protected by a password. When a user obtains a ticket-granting ticket from the KDC, they need to type their password, so Kerberos can decrypt the ticket. It would be inconvenient for system administrators to obtain new tickets for the SSH daemon every eight hours or so.
Instead, the key required to decrypt the initial ticket for the service
principal is extracted by the administrator from the KDC only once and
stored in a local file called the keytab. Services
such as the SSH daemon read this key and use it to obtain new tickets
automatically, when needed. The default keytab file resides in
/etc/krb5.keytab.
To create a host service principal for jupiter.example.com,
enter the following commands during your kadmin session:
tux > kadmin -p geeko/admin
Authenticating as principal geeko/admin@EXAMPLE.COM with password.
Password for geeko/admin@EXAMPLE.COM:
kadmin: addprinc -randkey host/jupiter.example.com
WARNING: no policy specified for host/jupiter.example.com@EXAMPLE.COM;
defaulting to no policy
Principal "host/jupiter.example.com@EXAMPLE.COM" created.
Instead of setting a password for the new principal, the
-randkey flag tells kadmin to
generate a random key. This is used here because no user interaction is
wanted for this principal. It is a server account for the machine.
Finally, extract the key and store it in the local keytab file
/etc/krb5.keytab. This file is owned by the
superuser, so you must be root
to execute the next command in the kadmin shell:
kadmin: ktadd host/jupiter.example.com
Entry for principal host/jupiter.example.com with kvno 3, encryption type Triple DES cbc mode with HMAC/sha1 added to keytab WRFILE:/etc/krb5.keytab.
Entry for principal host/jupiter.example.com with kvno 3, encryption type DES cbc mode with CRC-32 added to keytab WRFILE:/etc/krb5.keytab.
kadmin:
When completed, make sure that you destroy the admin ticket obtained
with kinit above with kdestroy.
An incomplete Kerberos configuration may completely lock you out of
your system, including the root user. To prevent this, add the
ignore_unknown_principals directive to the
pam_krb5 module after you
have added the pam_krb5 module to the existing
PAM configuration files as described below.
tux > sudo pam-config --add --krb5-ignore_unknown_principals
This will direct the pam_krb5 module to ignore
some errors that would otherwise cause the account phase to fail.
openSUSE® Leap comes with a PAM module named
pam_krb5, which supports Kerberos login and
password update. This module can be used by applications such as console
login, su, and graphical login applications like GDM.
That is, it can be used in all cases where the user enters a password
and expects the authenticating application to obtain an initial Kerberos
ticket on their behalf. To configure PAM support for Kerberos, use the
following command:
tux > sudo pam-config --add --krb5
The above command adds the pam_krb5 module to the
existing PAM configuration files and makes sure it is called in the
right order. To make precise adjustments to the way in which
pam_krb5 is used, edit the file
/etc/krb5.conf and add default applications to
PAM. For details, refer to the manual page with
man 5 pam_krb5.
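As a hedged sketch, such defaults live in an [appdefaults] section of /etc/krb5.conf. The option names and values below are examples only and should be verified against man 5 pam_krb5:

```
# /etc/krb5.conf (excerpt) -- illustrative pam_krb5 defaults;
# verify option names with man 5 pam_krb5
[appdefaults]
    pam = {
        ticket_lifetime = 1d
        renew_lifetime = 1d
        forwardable = true
    }
```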
The pam_krb5 module was specifically not designed
for network services that accept Kerberos tickets as part of user
authentication. This is an entirely different matter, and is
discussed below.
OpenSSH supports Kerberos authentication in both protocol versions 1 and 2. In version 1, there are special protocol messages to transmit Kerberos tickets. Version 2 no longer uses Kerberos directly, but relies on GSSAPI, the Generic Security Services API. This is a programming interface that is not specific to Kerberos: it was designed to hide the peculiarities of the underlying authentication system, be it Kerberos, a public-key authentication system like SPKM, or others. However, the included GSSAPI library only supports Kerberos.
To use sshd with Kerberos authentication, edit
/etc/ssh/sshd_config and set the following options:
# These are for protocol version 1
#
# KerberosAuthentication yes
# KerberosTicketCleanup yes

# These are for version 2 - better to use this
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
Then restart your SSH daemon using sudo systemctl restart
sshd.
To use Kerberos authentication with protocol version 2, enable it on the
client side as well. Do this either in the systemwide configuration file
/etc/ssh/ssh_config or on a per-user level by
editing ~/.ssh/config. In both cases, add the
option GSSAPIAuthentication yes.
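For example, a minimal per-user client configuration, assuming hosts in the example.com domain used throughout this chapter, might look like this:

```
# ~/.ssh/config -- enable GSSAPI (Kerberos) authentication
Host *.example.com
    GSSAPIAuthentication yes
    # Uncomment to also delegate (forward) the ticket to the remote host:
    # GSSAPIDelegateCredentials yes
```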
You should now be able to connect using Kerberos authentication. Use
klist to verify that you have a valid ticket, then
connect to the SSH server. To force SSH protocol version 1, specify the
-1 option on the command line.
The file
/usr/share/doc/packages/openssh/README.kerberos
discusses the interaction of OpenSSH and Kerberos in more detail.
The GSSAPIKeyExchange mechanism (RFC 4462) is
supported. This directive specifies how host keys are exchanged. For
more information, see the sshd_config manual page (man
sshd_config).
When using Kerberos, one way to distribute the user information (such as user ID, groups, and home directory) in your local network is to use LDAP. This requires a strong authentication mechanism that prevents packet spoofing and other attacks. One solution is to use Kerberos for LDAP communication, too.
OpenLDAP implements most authentication flavors through SASL, the Simple Authentication and Security Layer. SASL is a network protocol designed for authentication. The SASL implementation is cyrus-sasl, which supports several authentication flavors. Kerberos authentication is performed through GSSAPI (Generic Security Services API). By default, the SASL plug-in for GSSAPI is not installed. Install the cyrus-sasl-gssapi package with YaST.
To enable Kerberos to bind to the OpenLDAP server, create a principal
ldap/ldap.example.com and add that to the keytab.
By default, the LDAP server slapd runs as user and group
ldap, while the keytab file is
readable by root only.
Therefore, either change the LDAP configuration so the server runs as
root or make the keytab file
readable by the group ldap.
The latter is done automatically by the OpenLDAP start script
(/usr/lib/openldap/start) if the keytab file has
been specified in the OPENLDAP_KRB5_KEYTAB variable in
/etc/sysconfig/openldap and the
OPENLDAP_CHOWN_DIRS variable is set to
yes, which is the default setting. If
OPENLDAP_KRB5_KEYTAB is left empty, the default keytab
under /etc/krb5.keytab is used and you must adjust
the privileges yourself as described below.
To run slapd as root, edit
/etc/sysconfig/openldap. Disable the
OPENLDAP_USER and
OPENLDAP_GROUP variables by putting a comment
character in front of them.
To make the keytab file readable by group LDAP, execute
tux > sudo chgrp ldap /etc/krb5.keytab
tux > sudo chmod 640 /etc/krb5.keytab
A third (and maybe the best) solution is to tell OpenLDAP to use a special keytab file. To do this, start kadmin, and enter the following command after you have added the principal ldap/ldap.example.com:
tux > sudo ktadd -k /etc/openldap/ldap.keytab ldap/ldap.example.com@EXAMPLE.COM
Then in the shell run:
tux > sudo chown ldap:ldap /etc/openldap/ldap.keytab
tux > sudo chmod 600 /etc/openldap/ldap.keytab
To tell OpenLDAP to use a different keytab file, change the following
variable in /etc/sysconfig/openldap:
OPENLDAP_KRB5_KEYTAB="/etc/openldap/ldap.keytab"
Finally, restart the LDAP server using sudo systemctl
restart slapd.
You are now able to automatically use tools such as ldapsearch with Kerberos authentication.
tux > ldapsearch -b ou=people,dc=example,dc=com '(uid=geeko)'
SASL/GSSAPI authentication started
SASL SSF: 56
SASL installing layers
[...]
# geeko, people, example.com
dn: uid=geeko,ou=people,dc=example,dc=com
uid: geeko
cn: Suzanne Geeko
[...]
As you can see, ldapsearch prints a message that it
started GSSAPI authentication. The next message is very cryptic, but it
shows that the security strength factor (SSF for
short) is 56. (The value 56 is somewhat arbitrary; most likely it was chosen because it is the number of bits in a DES encryption key.)
This means that GSSAPI authentication was successful and that
encryption is being used to protect integrity and provide
confidentiality for the LDAP connection.
In Kerberos, authentication is always mutual. This means that not only have you authenticated yourself to the LDAP server, but also the LDAP server has authenticated itself to you. In particular, this means communication is with the desired LDAP server, rather than some bogus service set up by an attacker.
There is one minor piece of the puzzle missing—how the LDAP
server can find out that the Kerberos user
tux@EXAMPLE.COM corresponds to the LDAP
distinguished name
uid=tux,ou=people,dc=example,dc=com.
This sort of mapping must be configured manually using the authz-regexp directive. In this example, the authz-regexp change in LDIF would look as follows:
dn: cn=config
add: olcAuthzRegexp
olcAuthzRegexp: uid=(.*),cn=GSSAPI,cn=auth uid=$1,ou=people,dc=example,dc=com
All these changes can be applied via ldapmodify on
the command line.
When SASL authenticates a user, OpenLDAP forms a distinguished name
from the name given to it by SASL (such as tux) and the
name of the SASL flavor (GSSAPI). The result
would be
uid=tux,cn=GSSAPI,cn=auth.
If an authz-regexp has been configured, it checks the
DN formed from the SASL information using the first argument as a
regular expression. If this regular expression matches, the name is
replaced with the second argument of the
authz-regexp statement. The placeholder
$1 is replaced with the substring matched by the
(.*) expression.
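The substitution behaves like an ordinary regular-expression rewrite. The following shell sketch is only an analogy of what slapd does internally, using the names from the example above:

```shell
# Simulate the authz-regexp mapping with sed: the first argument acts as
# the pattern, the second as the replacement, and $1 corresponds to the
# substring matched by (.*).
sasl_dn='uid=tux,cn=GSSAPI,cn=auth'
mapped=$(printf '%s\n' "$sasl_dn" | \
  sed -E 's/^uid=(.*),cn=GSSAPI,cn=auth$/uid=\1,ou=people,dc=example,dc=com/')
echo "$mapped"   # prints uid=tux,ou=people,dc=example,dc=com
```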
More complicated match expressions are possible. If you have a more complicated directory structure or a schema in which the user name is not part of the DN, you can even use search expressions to map the SASL DN to the user DN.
For more information, see the slapd-config man page.
YaST includes the module that helps define authentication scenarios involving either LDAP or Kerberos.
It can also be used to join Kerberos and LDAP separately. However, in many such cases, using this module may not be the first choice, such as for joining Active Directory (which uses a combination of LDAP and Kerberos). For more information, see Section 4.2, “Configuring an Authentication Client with YaST”.
Start the module by selecting › .
To configure a Kerberos client, follow the procedure below:
In the window , click .
Choose the tab .
Click .
In the appearing dialog, specify the correct . Usually, the realm name is an uppercase version of the domain name. Additionally, you can specify the following:
To apply mappings from the realm name to the domain name, activate and/or .
You can specify the , the and additional .
All of these items are optional if they can be automatically
discovered via the SRV and
TXT records in DNS.
To manually map Principals to local user names, use .
You can also use auth_to_local rules to supply such
mappings using . For more information about using such rules, see
the official documentation at
https://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/krb5_conf.html#realms.
Continue with .
To add more realms, repeat from Step 2.
Enable Kerberos users logging in and creation of home directories by activating and .
If you left empty the optional text boxes in Step 3, make sure to enable automatic discovery of realms and key distribution centers by activating and .
You can additionally activate the following:
allows the encryption types listed as weak at http://web.mit.edu/kerberos/krb5-current/doc/admin/conf_files/kdc_conf.html#encryption-types.
allows forwarding of tickets.
allows the use of proxies between the computer of the user and the key distribution center.
allows granting tickets to users behind networks using network address translation.
To set up allowed encryption types and define the name of the keytab file which lists the names of principals and their encrypted keys, use the .
Finish with and .
YaST may now install extra packages.
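The automatic discovery of realms and key distribution centers mentioned in the procedure above relies on DNS records of the following shape (a hypothetical zone-file excerpt; host names are examples):

```
; Kerberos auto-discovery records for the EXAMPLE.COM realm
_kerberos._udp.example.com.      IN SRV 0 0 88  kdc.example.com.
_kerberos-adm._tcp.example.com.  IN SRV 0 0 749 kdc.example.com.
_kerberos.example.com.           IN TXT "EXAMPLE.COM"
```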
Most NFS servers can export filesystems using any combination of
the default "trust the network" form of security known as
sec=sys, and three different levels of Kerberos-based
security, sec=krb5, sec=krb5i and
sec=krb5p. The sec option is set
as a mount option on the client. It is often the case that NFS service is first configured and used with sec=sys, and Kerberos is imposed afterwards.
In this case it is likely that the server will be configured to
support both sec=sys and one of the Kerberos levels,
and then after all clients have transitioned, the
sec=sys support will be removed, thus achieving
true security. The transition to Kerberos should be fairly transparent
if done in an orderly manner. However, there is one subtle detail of NFS behavior that works differently when Kerberos is used, and the implications of this need to be understood and possibly addressed.
See Section 6.7.1, “Group Membership”.
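Because sec is a mount option, the chosen level appears in the client's mount command or /etc/fstab. A hypothetical entry (server name and paths are placeholders) might be:

```
# /etc/fstab (excerpt) -- NFS mount with Kerberos integrity protection
nfs.example.com:/export/home  /home  nfs  sec=krb5i,_netdev  0 0
```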
The three Kerberos levels indicate different levels of security. With more security comes a need for more processor power to encrypt and decrypt messages. Choosing the right balance is an important consideration that should go into planning a roll-out of Kerberos for NFS.
krb5 provides only authentication. The server
can know who sent a request, and the client can know that the
server did send a reply. No security is provided for the content of
the request or reply so an attacker with physical network access
could transform the request or reply, or both, in arbitrary ways to
deceive either server or client. They cannot directly read or
change any file that the authenticated user could not read or
change, but almost anything is theoretically
possible.
krb5i adds integrity checks to all messages.
With krb5i, an attacker cannot modify any
request or reply, but they can view all the data exchanged, and so
could discover the content of any file that is read.
krb5p adds privacy to the protocol. As well as
reliable authentication and integrity checking, messages are fully
encrypted so an attacker can only know that messages were exchanged
between client and server, and cannot extract other information
directly from the message. Whether information can be extracted
from message timing is a separate question that Kerberos does not
address.
The one behavioral difference between sec=sys and
the various Kerberos security levels that might be visible is related
to group membership. In Unix and Linux, each filesystem access
comes from a process which is owned by a particular user and has a
particular group owner, and a number of supplemental groups. Access
rights to files can vary based on the owner and the various groups.
With sec=sys, the user-id, group-id, and a list of
up to 16 supplemental groups are sent to the server in each
request.
If a user is a member of more than 16 supplemental groups, the extra groups are lost, and some files that the user would normally expect to access may not be accessible over NFS. For this reason, most sites that use NFS find a way to limit all users to at most 16 supplemental groups.
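To see whether a given account is affected by this limit, the supplemental groups can simply be counted; a small sketch:

```shell
# Count the current user's supplemental groups; with sec=sys only the
# first 16 are transmitted to the NFS server in each request.
group_count=$(id -G | wc -w)
echo "supplemental groups: $group_count"
if [ "$group_count" -gt 16 ]; then
    echo "WARNING: memberships beyond 16 groups are invisible over sec=sys NFS"
fi
```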
If the user runs the newgrp command or runs a set-group-id program, either of which can change the list of groups they are a member of, these changes take effect immediately and provide different access over NFS.
With Kerberos, group information is not sent in requests at all. Only the user is identified (using a Kerberos "principal"), and the server performs a lookup to determine the user ID and group list for that principal. This means that if the user is a member of more than 16 groups, all of these group memberships will be used in determining file access permissions. However, it also means that if the user changes a group ID on the client in some way, the server will not notice the change and will not take it into account in determining access rights.
In most cases, the improvement of having access to more groups brings a real benefit, and the loss of not being able to change groups goes unnoticed, as this feature is not widely used. A site administrator considering the use of Kerberos should be aware of the difference though, and ensure that it will not actually cause problems.
Using Kerberos for security requires extra CPU power for encrypting and decrypting messages. How much extra CPU power is required and whether the difference is noticeable will vary with different hardware and different applications. If the server or client are already saturating the available CPU power, it is likely that a performance drop will be measurable when switching from sec=sys to Kerberos. If there is spare CPU capacity available, it is quite possible that the transition will not result in any throughput change. The only way to be sure how much impact the use of Kerberos will have is to test your load on your hardware.
The only configuration options that might reduce the load will also
reduce the quality of the protection offered.
sec=krb5 should produce noticeably less load than sec=krb5p but, as discussed above, it does not produce very strong security. Similarly, it is possible to adjust
the list of ciphers that Kerberos can choose from, and this might
change the CPU requirement. However the defaults are carefully
chosen and should not be changed without similar careful
consideration.
The other possible performance issue when configuring NFS to use Kerberos involves availability of the Kerberos authentication servers, known as the KDC or Key Distribution Center.
The use of NFS adds load to these servers in the same way that using Kerberos for any other service does.
Every time a given user (Kerberos principal) establishes a
session with a service, for example by accessing files
exported by a particular NFS server, the client needs to negotiate
with the KDC. Once a session key has been negotiated, the client and server can communicate without further help for many hours,
depending on details of the Kerberos configuration, particularly the
ticket_lifetime setting.
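The setting lives in the [libdefaults] (or per-realm) section of /etc/krb5.conf; the values below are illustrative, not recommendations:

```
# /etc/krb5.conf (excerpt)
[libdefaults]
    ticket_lifetime = 24h
    renew_lifetime = 7d
```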
The concerns most likely to affect the provisioning of Kerberos KDC servers are availability and peak usage.
As with other core services such as DNS, LDAP or similar
name-lookup services, having two servers that are reasonably
"close" to every client provides good availability for modest
resources. Kerberos allows for multiple KDC servers with flexible
models for database propagation, so distributing servers as needed
around campuses, buildings, and even cabinets is fairly straightforward. The best mechanism to ensure each client finds a nearby
Kerberos server is to use split-horizon DNS with each building (or
similar) getting different details from the DNS server. If this is
not possible, then managing the /etc/krb5.conf
file to be different at different locations is a suitable
alternative.
As access to the Kerberos KDC is infrequent, load is only likely to be a problem at peak times. If thousands of people all log in between 9:00 and 9:05, then the servers will receive many more requests-per-minute than they might in the middle of the night. The load on the Kerberos server is likely to be more than that on an LDAP server, but not orders of magnitude more. A sensible guideline is to provision Kerberos replicas in the same manner that you provision LDAP replicas, and then monitor performance to determine if demand ever exceeds capacity.
One service of the Kerberos KDC that is not easily distributed is the handling of updates, such as password changes and new user creation. These must happen at a single master KDC.
These updates are not likely to happen with such frequency that any significant load will be generated, but availability could be an issue. It can be annoying if you want to create a new user or change a password, and the master KDC on the other side of the world is temporarily unavailable.
When an organization is geographically distributed and has a policy of handling administration tasks locally at each site, it can be beneficial to create multiple Kerberos domains, one for each administrative center. Each domain would then have its own master KDC which would be geographically local. Users in one domain can still get access to resources in another domain by setting up trust relationships between domains.
The easiest arrangement for multiple domains is to have a global domain (e.g. EXAMPLE.COM) and various local domains (e.g. ASIA.EXAMPLE.COM, EUROPE.EXAMPLE.COM, etc). If the global domain is configured to trust each local domain, and each local domain is configured to trust the global domain, then fully transitive trust is available between any pair of domains, and any principal can establish a secure connection with any service. Ensuring appropriate access rights to resources, for example files, provided by that service will be dependent on the user name lookup service used, and the functionality of the NFS file server, and is beyond the scope of this document.
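On clients, one way to express such trust paths is the [capaths] section of /etc/krb5.conf. A sketch with the example realms above, assuming both local realms trust the global realm:

```
# /etc/krb5.conf (excerpt) -- transitive trust via the global realm
[capaths]
    ASIA.EXAMPLE.COM = {
        EUROPE.EXAMPLE.COM = EXAMPLE.COM
    }
    EUROPE.EXAMPLE.COM = {
        ASIA.EXAMPLE.COM = EXAMPLE.COM
    }
```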
The official site of MIT Kerberos is http://web.mit.edu/kerberos. There, find links to any other relevant resource concerning Kerberos, including Kerberos installation, user, and administration guides.
The book Kerberos—A Network Authentication System by Brian Tung (ISBN 0-201-37924-4) offers extensive information.
Active Directory* (AD) is a directory-service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces policies. openSUSE® Leap lets you join existing Active Directory domains and integrate your Linux machine into a Windows environment.
With a Linux client (configured as an Active Directory client) that is joined to an existing Active Directory domain, benefit from various features not available on a pure openSUSE Leap Linux client:
GNOME Files (previously called Nautilus) supports browsing shared resources through SMB.
GNOME Files supports sharing directories and files as in Windows.
Through GNOME Files, users can access their Windows user data and can edit, create, and delete files and directories on the Windows server. Users can access their data without having to enter their password multiple times.
Users can log in and access their local data on the Linux machine even if they are offline or the Active Directory server is unavailable for other reasons.
This port of Active Directory support in Linux enforces corporate password policies
stored in Active Directory. The display managers and console support
password change messages and accept your input. You can even use the
Linux passwd command to set Windows passwords.
Many desktop applications are Kerberos-enabled (kerberized), which means they can transparently handle authentication for the user without the need for password reentry at Web servers, proxies, groupware applications, or other locations.
In Windows Server 2016 and later, Microsoft has removed the role IDMU/NIS Server and along with it the Unix Attributes plug-in for the Active Directory Users and Computers MMC snap-in.
However, Unix attributes can still be managed manually when enabled in the Active Directory Users and Computers MMC snap-in. For more information, see https://blogs.technet.microsoft.com/activedirectoryua/2016/02/09/identity-management-for-unix-idmu-is-deprecated-in-windows-server/.
Alternatively, use the method described in Procedure 7.1, “ Joining an Active Directory Domain Using ” to complete attributes on the client side (in particular, see Step 6.c).
The following section contains technical background for most of the previously named features. For more information about file and printer sharing using Active Directory, see GNOME User Guide.
Many system components need to interact flawlessly to integrate a Linux client into an existing Windows Active Directory domain. The following sections focus on the underlying processes of the key events in Active Directory server and client interaction.
To communicate with the directory service, the client needs to share at least two protocols with the server:
LDAP is a protocol optimized for managing directory information. A Windows domain controller with Active Directory can use the LDAP protocol to exchange directory information with the clients. To learn more about LDAP in general and about the open source port of it, OpenLDAP, refer to Chapter 5, LDAP—A Directory Service.
Kerberos is a third-party trusted authentication service. All its clients trust Kerberos's authorization of another client's identity, enabling kerberized single-sign-on (SSO) solutions. Windows supports a Kerberos implementation, making Kerberos SSO possible even with Linux clients. To learn more about Kerberos in Linux, refer to Chapter 6, Network Authentication with Kerberos.
Depending on which YaST module you use to set up Kerberos authentication, different client components process account and authentication data:
The sssd daemon is the
central part of this solution. It handles all communication with the
Active Directory server.
To gather name service information,
sssd_nss is used.
To authenticate users, the
pam_sss module for PAM
is used. The creation of user homes for the Active Directory users on the Linux
client is handled by pam_mkhomedir.
For more information about PAM, see Chapter 2, Authentication with PAM.
The winbindd daemon is the
central part of this solution. It handles all communication with the
Active Directory server.
To gather name service information,
nss_winbind is used.
To authenticate users, the
pam_winbind module for PAM
is used. The creation of user homes for the Active Directory users on the Linux
client is handled by pam_mkhomedir.
For more information about PAM, see Chapter 2, Authentication with PAM.
Figure 7.1, “Schema of Winbind-based Active Directory Authentication” highlights the most prominent components of Winbind-based Active Directory authentication.
Applications that are PAM-aware, like the login routines and the GNOME display manager, interact with the PAM and NSS layer to authenticate against the Windows server. Applications supporting Kerberos authentication (such as file managers, Web browsers, or e-mail clients) use the Kerberos credential cache to access user's Kerberos tickets, making them part of the SSO framework.
During domain join, the server and the client establish a secure relation. On the client, the following tasks need to be performed to join the existing LDAP and Kerberos SSO environment provided by the Windows domain controller. The entire join process is handled by the YaST Domain Membership module, which can be run during installation or in the installed system:
The Windows domain controller providing both LDAP and KDC (Key Distribution Center) services is located.
A machine account for the joining client is created in the directory service.
An initial ticket granting ticket (TGT) is obtained for the client and stored in its local Kerberos credential cache. The client needs this TGT to get further tickets allowing it to contact other services, like contacting the directory server for LDAP queries.
NSS and PAM configurations are adjusted to enable the client to authenticate against the domain controller.
During client boot, the winbind daemon is started and retrieves the initial Kerberos ticket for the machine account. winbindd automatically refreshes the machine's ticket to keep it valid. To keep track of the current account policies, winbindd periodically queries the domain controller.
The login manager of GNOME (GDM) has been extended to allow the handling of Active Directory domain login. Users can choose to log in to the primary domain the machine has joined or to one of the trusted domains with which the domain controller of the primary domain has established a trust relationship.
User authentication is mediated by several PAM modules as described in Section 7.2, “Background Information for Linux Active Directory Support”. If there are errors, the error codes are translated into user-readable error messages that PAM gives at login through any of the supported methods (GDM, console, and SSH):
Password has expired
The user sees a message stating that the password has expired and needs to be changed. The system prompts for a new password and informs the user if the new password does not comply with corporate password policies (for example the password is too short, too simple, or already in the history). If a user's password change fails, the reason is shown and a new password prompt is given.
Account disabled
The user sees an error message stating that the account has been disabled and to contact the system administrator.
Account locked out
The user sees an error message stating that the account has been locked and to contact the system administrator.
Password has to be changed
The user can log in but receives a warning that the password needs to be changed soon. This warning is sent three days before that password expires. After expiration, the user cannot log in.
Invalid workstation
When a user is restricted to specific workstations and the current openSUSE Leap machine is not among them, a message appears that this user cannot log in from this workstation.
Invalid logon hours
When a user is only allowed to log in during working hours and tries to log in outside working hours, a message informs the user that logging in is not possible at that time.
Account expired
An administrator can set an expiration time for a specific user account. If that user tries to log in after expiration, the user gets a message that the account has expired and cannot be used to log in.
During a successful authentication, the client acquires a ticket granting ticket (TGT) from the Kerberos server of Active Directory and stores it in the user's credential cache. It also renews the TGT in the background, requiring no user interaction.
openSUSE Leap supports local home directories for Active Directory users. If configured through YaST as described in Section 7.3, “Configuring a Linux Client for Active Directory”, user home directories are created when a Windows/Active Directory user first logs in to the Linux client. These home directories look and feel identical to standard Linux user home directories and work independently of the Active Directory Domain Controller.
Using a local user home, it is possible to access a user's data on this machine (even when the Active Directory server is disconnected) as long as the Linux client has been configured to perform offline authentication.
Users in a corporate environment must have the ability to become roaming users (for example, to switch networks or even work disconnected for some time). To enable users to log in to a disconnected machine, extensive caching was integrated into the winbind daemon. The winbind daemon enforces password policies even in the offline state. It tracks the number of failed login attempts and reacts according to the policies configured in Active Directory. Offline support is disabled by default and must be explicitly enabled in the YaST Domain Membership module.
When the domain controller has become unavailable, the user can still access network resources (other than the Active Directory server itself) with valid Kerberos tickets that have been acquired before losing the connection (as in Windows). Password changes cannot be processed unless the domain controller is online. While disconnected from the Active Directory server, a user cannot access any data stored on this server. When a workstation has become disconnected from the network entirely and connects to the corporate network again later, openSUSE Leap acquires a new Kerberos ticket when the user has locked and unlocked the desktop (for example, using a desktop screen saver).
Before your client can join an Active Directory domain, some adjustments must be made to your network setup to ensure the flawless interaction of client and server.
Configure your client machine to use a DNS server that can forward DNS requests to the Active Directory DNS server. Alternatively, configure your machine to use the Active Directory DNS server as the name service data source.
To succeed with Kerberos authentication, the client must have its time set accurately. It is highly recommended to use a central NTP time server for this purpose (this can be also the NTP server running on your Active Directory domain controller). If the clock skew between your Linux host and the domain controller exceeds a certain limit, Kerberos authentication fails and the client is logged in using the weaker NTLM (NT LAN Manager) authentication. For more details about using Active Directory for time synchronization, see Procedure 7.2, “ Joining an Active Directory Domain Using ”.
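For example, with chrony the Active Directory domain controller can be used as the time source (the host name is a placeholder):

```
# /etc/chrony.conf (excerpt) -- synchronize against the domain controller
server dc1.example.com iburst
```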
To browse your network neighborhood, either disable the firewall entirely or mark the interface used for browsing as part of the internal zone.
To change the firewall settings on your client, log in as
root and start the YaST firewall module. Select
. Select your network interface from the
list of interfaces and click . Select
and apply your settings with
. Leave the firewall settings with › . To
disable the firewall, check the option, and leave the firewall module with
› .
You cannot log in to an Active Directory domain unless the Active Directory administrator has provided you with a valid user account for that domain. Use the Active Directory user name and password to log in to the Active Directory domain from your Linux client.
YaST contains multiple modules that allow connecting to an Active Directory:
. Use both an identity service (usually LDAP) and a user authentication service (usually Kerberos). This option is based on SSSD and in the majority of cases is best suited for joining Active Directory domains.
This module is described in Section 7.3.2, “ Joining Active Directory Using ”.
.
Join an Active Directory (which entails use of Kerberos and LDAP). This option is
based on winbind and is best suited for joining an
Active Directory domain if support for NTLM or cross-forest trusts is necessary.
This module is described in Section 7.3.3, “ Joining Active Directory Using ”.
. Allows setting up LDAP identities and Kerberos authentication independently from each other and provides fewer options. While this module also uses SSSD, it is not as well suited for connecting to Active Directory as the previous two options.
This module is described in:
The YaST module supports authentication against an Active Directory domain. It also supports the following related authentication and identification providers:
. Support for legacy NSS providers via a proxy.
. FreeIPA and Red Hat Enterprise Identity Management provider.
.
An LDAP provider. For more information about configuring LDAP, see
man 5 sssd-ldap.
. An SSSD-internal provider for local users.
. Relay authentication to another PAM target via a proxy.
. FreeIPA and Red Hat Enterprise Identity Management provider.
. An LDAP provider.
. Kerberos authentication.
. An SSSD-internal provider for local users.
. Disables authentication explicitly.
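The identity and authentication providers listed above are selected in /etc/sssd/sssd.conf via the id_provider and auth_provider options. The following minimal sketch assumes a domain named example.com (a placeholder) and SSSD's ad provider, which is what the module configures for Active Directory:

```
# /etc/sssd/sssd.conf -- minimal sketch; example.com is a placeholder
[sssd]
config_file_version = 2
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ad
auth_provider = ad
```

Note that SSSD refuses to start unless this file is owned by root with permissions 0600.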
To join an Active Directory domain using SSSD and the module of YaST, proceed as follows:
Open YaST.
To be able to use DNS auto-discovery later, set up the Active Directory Domain Controller (the Active Directory server) as the name server for your client.
In YaST, click .
Select , then enter the IP address of the Active Directory Domain Controller into the text box .
Save the setting with .
From the YaST main window, start the module .
The module opens with an overview showing different network properties of your computer and the authentication method currently in use.
To start editing, click .
Now join the domain.
Click .
In the appearing dialog, specify the correct . Then specify the services to use for identity data and authentication: Select for both.
Ensure that is activated.
Click .
(Optional) Usually, you can keep the default settings in the following dialog. However, there are reasons to make changes:
If the Local Host Name Does Not Match the Host Name Set on the Domain Controller. Find out whether the host name of your computer matches the name your computer is known by to the Active Directory Domain Controller. In a terminal, run the command hostname, then compare its output to the configuration of the Active Directory Domain Controller. If the values differ, specify the host name from the Active Directory configuration under . Otherwise, leave the appropriate text box empty.
If You Do Not Want to Use DNS Auto-Discovery. Specify the that you want to use. If there are multiple Domain Controllers, separate their host names with commas.
To continue, click .
If some required software is not installed yet, it will now be installed. YaST then checks whether the configured Active Directory Domain Controller is available.
If everything is correct, the following dialog should now show that it has discovered an but that you are .
In the dialog, specify the and
of the Active Directory administrator account
(usually Administrator).
To make sure that the current domain is enabled for Samba, activate .
To enroll, click .
You should now see a message confirming that you have enrolled successfully. Finish with .
After enrolling, configure the client using the window .
To allow logging in to the computer using login data provided by Active Directory, activate .
(Optional) Under , activate additional data sources such as information on which users are allowed to use sudo or which network drives are available.
To allow Active Directory users to have home directories, activate . The path for home directories can be set in multiple ways—on the client, on the server, or both ways:
To configure the home directory paths on the Domain Controller, set an appropriate value for the attribute UnixHomeDirectory for each user. Additionally, make sure that this attribute is replicated to the global catalog. For information on achieving that under Windows, see https://support.microsoft.com/en-us/kb/248717.
To configure home directory paths on the client in such a way that
precedence will be given to the path set on the domain controller,
use the option fallback_homedir.
To configure home directory paths on the client in such a way that
the client setting will override the server setting, use
override_homedir.
As settings on the Domain Controller are outside of the scope of this documentation, only the configuration of the client-side options will be described in the following.
From the side bar, select
› , then click
. From that window, select either
fallback_homedir or
override_homedir, then click .
Specify a value. To have home directories follow the format
/home/USER_NAME, use
/home/%u.
For more information about possible variables, see the man page
sssd.conf (man 5 sssd.conf),
section
override_homedir.
Click .
Save the changes by clicking . Then make sure that the values displayed now are correct. To leave the dialog, click .
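In sssd.conf terms, the client-side home directory options described above are entries in the domain section. A sketch, assuming the domain name example.com (adjust to your environment):

```
# Sketch: client-side home directory options in /etc/sssd/sssd.conf
# (example.com is a placeholder for your domain)
[domain/example.com]
# Used only when the server does not supply a home directory:
fallback_homedir = /home/%u
# Or, to always override the server-side value, use instead:
# override_homedir = /home/%u
```

See man 5 sssd.conf for the full list of template variables such as %u.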
To join an Active Directory domain using winbind and the
module of YaST, proceed as
follows:
Log in as root and start YaST.
Start › .
Enter the domain to join at in
the screen (see
Figure 7.5, “Determining Windows Domain Membership”). If the DNS settings on your host
are properly integrated with the Windows DNS server, enter the Active Directory
domain name in its DNS format
(mydomain.mycompany.com). If you enter the short
name of your domain (also known as the pre–Windows 2000 domain
name), YaST must rely on NetBIOS name resolution instead of DNS to
find the correct domain controller.
To use the SMB source for Linux authentication, activate .
To automatically create a local home directory for Active Directory users on the Linux machine, activate .
Check to allow your domain users to log in even if the Active Directory server is temporarily unavailable, or if you do not have a network connection.
To change the UID and GID ranges for the Samba users and groups, select . Let DHCP retrieve the WINS server only if you need it. This is the case when some machines are resolved only by the WINS system.
Configure NTP time synchronization for your Active Directory environment by selecting and entering an appropriate server name or IP address. This step is obsolete if you have already entered the appropriate settings in the stand-alone YaST NTP configuration module.
Click and confirm the domain join when prompted for it.
Provide the password for the Windows administrator on the Active Directory server and click (see Figure 7.6, “Providing Administrator Credentials”).
After you have joined the Active Directory domain, you can log in to it from your workstation using the display manager of your desktop or the console.
Joining a domain may not succeed if the domain name ends with
.local. Names ending in .local
cause conflicts with Multicast DNS (MDNS) where
.local is reserved for link-local host names.
Only a domain administrator account, such as
Administrator, can join openSUSE Leap into Active
Directory.
To check whether you are successfully enrolled in an Active Directory domain, use the following commands:
klist shows whether the current user has a valid
Kerberos ticket.
getent passwd shows published LDAP data for all
users.
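For example, a quick check from a shell (klist is only available if the Kerberos client tools are installed, and the users shown by getent passwd depend on your domain configuration):

```shell
# Check for a valid Kerberos ticket (klist -s exits with 0 if one exists)
if klist -s 2>/dev/null; then
    echo "Kerberos ticket: valid"
else
    echo "Kerberos ticket: none"
fi

# List users known to the system, including those published via LDAP
getent passwd | head -n 5
```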
Provided your machine has been configured to authenticate against Active Directory and you have a valid Windows user identity, you can log in to your machine using the Active Directory credentials. Login is supported for GNOME, the console, SSH, and any other PAM-aware application.
openSUSE Leap supports offline authentication, allowing you to log in to your client machine even when it is offline. See Section 7.2.3, “Offline Service and Policy Support” for details.
To authenticate a GNOME client machine against an Active Directory server, proceed as follows:
Click .
In the text box ,
enter the domain name and the Windows user name in this form:
DOMAIN_NAME\USER_NAME.
Enter your Windows password.
If configured to do so, openSUSE Leap creates a user home directory on the local machine on the first login of each user authenticated via Active Directory. This allows you to benefit from the Active Directory support of openSUSE Leap while still having a fully functional Linux machine at your disposal.
Besides logging in to the Active Directory client machine using a graphical front-end, you can log in using the text-based console or even remotely using SSH.
To log in to your Active Directory client from a console, enter
DOMAIN_NAME\USER_NAME
at the login: prompt and provide the password.
To remotely log in to your Active Directory client machine using SSH, proceed as follows:
At the login prompt, enter:
tux > ssh DOMAIN_NAME\\USER_NAME@HOST_NAME
The \ delimiter between the domain and the login name is escaped with a second \ character.
Provide the user's password.
openSUSE Leap helps the user choose a suitable new password that meets the corporate security policy. The underlying PAM module retrieves the current password policy settings from the domain controller and informs the user of the specific password quality requirements via a message at login. Like its Windows counterpart, openSUSE Leap presents a message describing:
Password history settings
Minimum password length requirements
Minimum password age
Password complexity
The password change process cannot succeed unless all requirements have been successfully met. Feedback about the password status is given both through the display managers and the console.
GDM provides feedback about password expiration and the prompt for new passwords in an interactive mode. To change passwords in the display managers, provide the password information when prompted.
To change your Windows password, you can use the standard Linux utility,
passwd, instead of having to manipulate this data on
the server. To change your Windows password, proceed as follows:
Log in at the console.
Enter passwd.
Enter your current password when prompted.
Enter the new password.
Reenter the new password for confirmation. If your new password does not comply with the policies on the Windows server, this feedback is given to you and you are prompted for another password.
To change your Windows password from the GNOME desktop, proceed as follows:
Click the icon on the left edge of the panel.
Select .
From the section, select › .
Enter your old password.
Enter and confirm the new password.
Leave the dialog with to apply your settings.
The YaST module offers a central clearinghouse to configure security-related settings for openSUSE Leap. Use it to configure security aspects such as settings for the login procedure and for password creation, for boot permissions, user creation or for default file permissions. Launch it from the YaST control center by › . The dialog always starts with the , and other configuration dialogs are available from the right pane.
PolKit (formerly known as PolicyKit) is an application framework that
acts as a negotiator between the unprivileged user session and the
privileged system context. Whenever a process from the user session
tries to carry out an action in the system context, PolKit is queried.
Based on its configuration—specified in a so-called
“policy”—the answer could be “yes”,
“no”, or “needs authentication”. Unlike
classical privilege authorization programs such as sudo, PolKit does
not grant root permissions to an entire session, but only to
the action in question.
POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.
Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.
Certificates play an important role in the authentication of companies and individuals. Usually certificates are administered by the application itself. In some cases, it makes sense to share certificates between applications. The certificate store is a common ground for Firefox, Evolution, and NetworkManager. This chapter explains some details.
Securing your systems is a mandatory task for any mission-critical
system administrator. Because it is impossible to always guarantee that
the system is not compromised, it is very important to do extra checks
regularly (for example with
cron) to ensure that the system
is still under your control. This is where AIDE, the
Advanced Intrusion Detection Environment, comes
into play.
The displays a comprehensive list of the most important security settings for your system. The security status of each entry in the list is clearly visible. A green check mark indicates a secure setting while a red cross indicates an entry as being insecure. Click to open an overview of the setting and information on how to make it secure. To change a setting, click the corresponding link in the Status column. Depending on the setting, the following entries are available:
Click this entry to toggle the status of the setting to either enabled or disabled.
Click this entry to launch another YaST module for configuration. You will return to the Security Overview when leaving the module.
A setting's status is set to unknown when the associated service is not installed. Such a setting does not represent a potential security risk.
openSUSE Leap comes with three . These configurations affect all the settings available in the module. Each configuration can be modified to your needs using the dialogs available from the right pane, which changes its state to :
A configuration for a workstation with any kind of network connection (including a connection to the Internet).
This setting is designed for a laptop or tablet that connects to different networks.
Security settings designed for a machine providing network services such as a Web server, file server, name server, etc. This set provides the most secure configuration of the predefined settings.
A pre-selected (when opening the dialog) indicates that one of the predefined sets has been modified. Actively choosing this option does not change the current configuration—you will need to change it using the .
Passwords that are easy to guess are a major security issue. The dialog provides the means to ensure that only secure passwords can be used.
By activating this option, a warning will be issued if new passwords appear in a dictionary, or if they are proper names (proper nouns).
If the user chooses a password with a length shorter than specified here, a warning will be issued.
When password expiration is activated (via ), this setting stores the given number of a user's previous passwords, preventing their reuse.
Choose a password encryption algorithm. Normally there is no need to change the default (Blowfish).
Activate password expiration by specifying a minimum and a maximum
time limit (in days). By setting the minimum age to a value greater
than 0 days, you can prevent users from immediately
changing their passwords again (and in doing so circumventing the
password expiration). Use the values 0 and
99999 to deactivate password expiration.
When a password expires, the user receives a warning in advance. Specify the number of days prior to the expiration date that the warning should be issued.
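Outside of YaST, the system-wide password aging defaults live in /etc/login.defs. A sketch of the corresponding keys (the values shown are examples, not openSUSE defaults):

```
# Sketch: password aging keys in /etc/login.defs (example values)
PASS_MAX_DAYS   90    # maximum password age in days
PASS_MIN_DAYS   1     # minimum password age in days
PASS_WARN_AGE   7     # days of warning before a password expires
```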
Configure which users can shut down the machine via the graphical login manager in this dialog. You can also specify how Ctrl–Alt–Del will be interpreted and who can hibernate the system.
This dialog lets you configure security-related login settings:
To make it difficult to guess a user's password by repeatedly logging in, it is recommended to delay the display of the login prompt that follows an incorrect login. Specify the value in seconds. Make sure that users who have mistyped their passwords do not need to wait too long.
When checked, the graphical login manager (GDM) can be accessed from the network. This is a potential security risk.
Set minimum and maximum values for user and group IDs. These default settings would rarely need to be changed.
Other security settings that do not fit the above-mentioned categories are listed here:
openSUSE Leap comes with three predefined sets of file permissions
for system files. These permission sets define whether a regular user
may read log files or start certain programs.
file permissions are suitable for stand-alone machines. These settings
allow regular users to, for example, read most system files. See the
file /etc/permissions.easy for the complete
configuration. The file permissions are
designed for multiuser machines with network access. A thorough
explanation of these settings can be found in
/etc/permissions.secure. The
settings are the most restrictive ones and
should be used with care. See
/etc/permissions.paranoid for more information.
The program updatedb scans the system and creates a
database of all file locations which can be queried with the command
locate. When updatedb is run as
user nobody, only world-readable files will be added to the database.
When run as user root, almost all files (except the ones root
is not allowed to read) will be added.
The magic SysRq key is a key combination that enables you to have some control over the system even when it has crashed. The complete documentation can be found at https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html.
PolKit works by limiting specific actions to certain users, by group or by name. It then defines how those users are allowed to perform this action.
When a user starts a session (using the graphical environment or on the console), each session consists of the authority and an authentication agent. The authority is implemented as a service on the system message bus, whereas the authentication agent is used to authenticate the current user, who started the session. The user needs to prove their authenticity, for example, using a passphrase.
Each desktop environment has its own authentication agent. Usually it is started automatically, whatever environment you choose.
PolKit's configuration depends on actions and authorization rules:
Actions (*.policy)
Written as XML files and located in /usr/share/polkit-1/actions. Each file defines one or more actions, and each action contains descriptions and default permissions. Although a system administrator can write their own rules, PolKit's action files must not be edited.
Authorization Rules (*.rules)
Written as JavaScript files and located in two places:
/usr/share/polkit-1/rules.d is used for third
party packages and /etc/polkit-1/rules.d for
local configurations. Each rule file refers to the action specified
in the action file. A rule determines which restrictions apply to a subset of users. For example, a rule file could overrule a restrictive permission and allow some users to perform the action.
PolKit contains several commands for specific tasks (see also the specific man page for further details):
pkaction
Get details about a defined action. See Section 9.3, “Querying Privileges” for more information.
pkcheck
Checks whether a process is authorized, specified by either
--process or --system-bus-name.
pkexec
Allows an authorized user to execute the specific program as another user.
pkttyagent
Starts a textual authentication agent. This agent is used if a desktop environment does not have its own authentication agent.
At the moment, not all applications requiring privileges use PolKit. Find the most important policies available on openSUSE® Leap below, sorted into the categories where they are used.
| Set scheduling priorities for the PulseAudio daemon |
| Add, remove, edit, enable or disable printers |
| Modify schedule |
| Modify system and mandatory values with GConf |
| Change the system time |
| Manage and monitor local virtualized systems |
| Apply and modify connections |
| Read and change privileges for other users |
| Modify defaults |
| Update and remove packages |
| Change and refresh repositories |
| Install local files |
| Rollback |
| Import repository keys |
| Accepting EULAs |
| Setting the network proxy |
| Wake on LAN |
| Mount or unmount fixed, hotpluggable and encrypted devices |
| Eject and decrypt removable media |
| Enable or disable WLAN |
| Enable or disable Bluetooth |
| Device access |
| Stop, suspend, hibernate and restart the system |
| Undock a docking station |
| Change power-management settings |
| Register product |
| Change the system time and language |
Every time a PolKit-enabled process carries out a privileged operation,
PolKit is asked whether this process is entitled to do so. PolKit
answers according to the policy defined for this process. The answers can
be yes, no, or
authentication needed. By default, a policy contains
implicit privileges, which automatically apply to all
users. It is also possible to specify explicit
privileges which apply to a specific user.
Implicit privileges can be defined for any active and inactive sessions. An active session is the one in which you are currently working; it becomes inactive when you switch to another console, for example. When setting implicit privileges to “no”, no user is authorized, whereas “yes” authorizes all users. However, it is usually useful to demand authentication.
A user can either authorize by authenticating as root or by
authenticating as self. Both authentication methods exist in four
variants:
The user always needs to authenticate.
The authentication is bound to the instance of the program currently running. After the program is restarted, the user is required to authenticate again.
The authentication dialog offers a check button . If checked, the authentication is valid until the user logs out.
The authentication dialog offers a check button . If checked, the user needs to authenticate only once.
Explicit privileges can be granted to specific users. They can either be granted without limitations, or, when using constraints, limited to an active session and/or a local console.
It is not only possible to grant privileges to a user, a user can also be blocked. Blocked users cannot carry out an action requiring authorization, even though the default implicit policy allows authorization by authentication.
Each application supporting PolKit comes with a default set of implicit policies defined by the application's developers. Those policies are the so-called “upstream defaults”. The privileges defined by the upstream defaults are not necessarily the ones that are activated by default on SUSE systems. openSUSE Leap comes with a predefined set of privileges that override the upstream defaults:
/etc/polkit-default-privs.standard
Defines privileges suitable for most desktop systems
/etc/polkit-default-privs.restrictive
Designed for machines administrated centrally
To switch between the two sets of default privileges, adjust the value
of POLKIT_DEFAULT_PRIVS to either
restrictive or standard in
/etc/sysconfig/security. Then run the command
set_polkit_default_privs as root.
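The change amounts to editing a single line. A sketch, shown here against a temporary copy of the file (on a real system, edit /etc/sysconfig/security directly and run the command as root):

```shell
# Work on a temporary copy; on a real system this is /etc/sysconfig/security
conf=/tmp/security.demo
echo 'POLKIT_DEFAULT_PRIVS="standard"' > "$conf"

# Switch from the standard to the restrictive privilege set
sed -i 's/^POLKIT_DEFAULT_PRIVS=.*/POLKIT_DEFAULT_PRIVS="restrictive"/' "$conf"
cat "$conf"

# Then, as root: /sbin/set_polkit_default_privs
```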
Do not modify the two files in the list above. To define your
own custom set of privileges, use
/etc/polkit-default-privs.local. For details, refer
to
Section 9.4.3, “Modifying Configuration Files for Implicit Privileges”.
To query privileges use the command pkaction included
in PolKit.
PolKit comes with command line tools for changing privileges and
executing commands as another user (see
Section 9.1.3, “Available Commands” for a short
overview). Each existing policy has a unique, descriptive name by which it can be identified. List all available policies with the command pkaction.
When invoked with no parameters, the command pkaction
lists all policies. By adding the
--show-overrides option, you can list all policies that
differ from the default values. To reset the privileges for a given
action to the (upstream) defaults, use the option
--reset-defaults ACTION.
See man pkaction for more information.
If you want to display the needed authorization for a given policy (for
example, org.freedesktop.login1.reboot) use
pkaction as follows:
tux > pkaction -v --action-id org.freedesktop.login1.reboot
org.freedesktop.login1.reboot:
description: Reboot the system
message: Authentication is required to allow rebooting the system
vendor: The systemd Project
vendor_url: http://www.freedesktop.org/wiki/Software/systemd
icon:
implicit any: auth_admin_keep
implicit inactive: auth_admin_keep
implicit active: yes
The keyword auth_admin_keep means that users need to authenticate with the root passphrase, and the authorization is then retained.
pkaction on openSUSE Leap
pkaction always operates on the upstream defaults.
Therefore it cannot be used to list or restore the defaults shipped with
openSUSE Leap. To do so, refer to
Section 9.5, “Restoring the Default Privileges”.
Adjusting privileges by modifying configuration files is useful when you want to deploy the same set of policies to different machines, for example to the computers of a specific team. It is possible to change implicit and explicit privileges by modifying configuration files.
The available actions depend on what additional packages you have
installed on your system. For a quick overview, use
pkaction to list all defined rules.
To get an idea, the following example describes how the command
gparted (“GNOME Partition Editor”)
is integrated into PolKit.
The file
/usr/share/polkit-1/actions/org.opensuse.policykit.gparted.policy
contains the following content:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
<policyconfig> 1
  <action id="org.opensuse.policykit.gparted"> 2
    <message>Authentication is required to run the GParted Partition Editor</message>
    <icon_name>gparted</icon_name>
    <defaults> 3
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
    <annotate key="org.freedesktop.policykit.exec.path">/usr/sbin/gparted</annotate> 4
    <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate> 4
  </action>
</policyconfig>
1. Root element of the policy file.
2. Contains one single action.
3. The
4. The
To add your own policy, create a .policy file with
the structure above, add the appropriate value into the
id attribute, and define the default permissions.
Your own authorization rules overrule the default settings. To add your
own settings, store your files under
/etc/polkit-1/rules.d/.
The files in this directory start with a two-digit number, followed by a descriptive name, and end with .rules. The functions inside these files are executed in the sorted order of the file names. For example, 00-foo.rules is sorted (and hence executed) before 60-bar.rules or even 90-default-privs.rules.
Inside the file, the script checks for the specified action ID, which is
defined in the .policy file. For example, if you
want to allow the command gparted to be executed by
any member of the admin
group, check for the action ID
org.opensuse.policykit.gparted:
/* Allow users in admin group to run GParted without authentication */
polkit.addRule(function(action, subject) {
if (action.id == "org.opensuse.policykit.gparted" &&
subject.isInGroup("admin")) {
return polkit.Result.YES;
}
});

Find the description of all classes and methods of the functions in the PolKit API at http://www.freedesktop.org/software/polkit/docs/latest/ref-api.html.
openSUSE Leap ships with two sets of default authorizations, located
in /etc/polkit-default-privs.standard and
/etc/polkit-default-privs.restrictive. For more
information, refer to
Section 9.2.3, “Default Privileges”.
Custom privileges are defined in
/etc/polkit-default-privs.local. Privileges defined
here will always take precedence over the ones defined in the other
configuration files. To define your custom set of privileges,
do the following:
Open /etc/polkit-default-privs.local. To define a
privilege, add a line for each policy with the following format:
<privilege_identifier> <any session>:<inactive session>:<active session>
For example:
org.freedesktop.policykit.modify-defaults auth_admin_keep_always
The following values are valid for the SESSION placeholders:
yes
grant privilege
no
block
auth_self
user needs to authenticate with own password every time the privilege is requested
auth_self_keep_session
user needs to authenticate with own password once per session, privilege is granted for the whole session
auth_self_keep_always
user needs to authenticate with own password once, privilege is granted for the current and for future sessions
auth_admin
user needs to authenticate with root password every time
the privilege is requested
auth_admin_keep_session
user needs to authenticate with root password once per
session, privilege is granted for the whole session
auth_admin_keep_always
user needs to authenticate with root password once,
privilege is granted for the current and for future sessions
Run the following command as root for the changes to take effect:
# /sbin/set_polkit_default_privs
Optionally check the list of all privilege identifiers with the
command pkaction.
openSUSE Leap comes with a predefined set of privileges that is activated by default and thus overrides the upstream defaults. For details, refer to Section 9.2.3, “Default Privileges”.
Since the graphical PolKit tools and the command line tools always
operate on the upstream defaults, openSUSE Leap includes an additional
command-line tool, set_polkit_default_privs. It resets
privileges to the values defined in
/etc/polkit-default-privs.*. However, the command
set_polkit_default_privs will only reset policies that
are set to the upstream defaults.
Make sure /etc/polkit-default-privs.local does not
contain any overrides of the default policies.
Policies defined in
/etc/polkit-default-privs.local will be applied
on top of the defaults during the next step.
To reset all policies to the upstream defaults first and then apply the openSUSE Leap defaults:
tux > sudo rm -f /var/lib/polkit/* && sudo set_polkit_default_privs
POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.
The term POSIX ACL suggests that this is a true POSIX (portable operating system interface) standard. The respective draft standards POSIX 1003.1e and POSIX 1003.2c have been withdrawn for several reasons. Nevertheless, ACLs (as found on many systems belonging to the Unix family) are based on these drafts and the implementation of file system ACLs (as described in this chapter) follows these two standards.
Find detailed information about the traditional file permissions in the
GNU Coreutils Info page, Node File permissions
(info coreutils "File permissions"). More advanced
features are the setuid, setgid, and sticky bit.
In certain situations, the access permissions may be too restrictive.
Therefore, Linux has additional settings that enable the temporary
change of the current user and group identity for a specific action. For
example, the passwd program normally requires root
permissions to access /etc/passwd. This file
contains some important information, like the home directories of users
and user and group IDs. Thus, a normal user would not be able to change
passwd, because it would be too dangerous to grant
all users direct access to this file. A possible solution to this
problem is the setuid mechanism. setuid (set user
ID) is a special file attribute that instructs the system to execute
programs marked accordingly under a specific user ID. Consider the
passwd command:
-rwsr-xr-x 1 root shadow 80036 2004-10-02 11:08 /usr/bin/passwd
You can see the s that denotes that the setuid bit is
set for the user permission. By means of the setuid bit, all users
starting the passwd command execute it as
root.
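You can set the setuid bit on a placeholder file of your own to see how it is displayed (purely an illustration; the kernel only honors setuid on real executables, and the bit should never be set casually):

```shell
# Sketch: set and inspect the setuid bit (empty placeholder file, not a real binary)
touch demo_prog
chmod 4755 demo_prog           # leading 4 in the octal mode sets the setuid bit
ls -l demo_prog                # shows -rwsr-xr-x: "s" replaces "x" in the user triad
```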
The setuid bit applies to users. However, there is also an equivalent property for groups: the setgid bit. A program for which this bit was set runs under the group ID under which it was saved, no matter which user starts it. Therefore, in a directory with the setgid bit, all newly created files and subdirectories are assigned to the group to which the directory belongs. Consider the following example directory:
drwxrws--- 2 tux archive 48 Nov 19 17:12 backup
You can see the s that denotes that the setgid bit is
set for the group permission. The owner of the directory and members of
the group archive may access
this directory. Users that are not members of this group are
“mapped” to the respective group. The effective group ID of
all written files will be
archive. For example, a
backup program that runs with the group ID
archive can access
this directory even without root privileges.
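A setgid shared directory like the one above can be sketched as follows (directory and file names are examples):

```shell
# Sketch: a shared directory with the setgid bit set
mkdir demo_shared
chmod 2770 demo_shared         # leading 2 in the octal mode sets the setgid bit
ls -ld demo_shared             # shows drwxrws---: "s" in the group triad
touch demo_shared/report       # new files inherit the directory's owning group
```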
There is also the sticky bit. It makes a difference
whether it belongs to an executable program or a directory. If it
belongs to a program, a file marked in this way is loaded to RAM to
avoid needing to get it from the hard disk each time it is used. This
attribute is used rarely, because modern hard disks are fast enough. If
this bit is assigned to a directory, it prevents users from deleting
each other's files. Typical examples include the
/tmp and /var/tmp directories:
drwxrwxrwt 2 root root 1160 2002-11-19 17:15 /tmp
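A directory with the same protection as /tmp can be created like this (the name is an example):

```shell
# Sketch: a world-writable directory protected by the sticky bit, like /tmp
mkdir demo_tmp
chmod 1777 demo_tmp            # leading 1 in the octal mode sets the sticky bit
ls -ld demo_tmp                # shows drwxrwxrwt: the trailing "t" marks the sticky bit
```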
Traditionally, three permission sets are defined for each file object on
a Linux system. These sets include the read (r), write
(w), and execute (x) permissions
for each of three types of users—the file owner, the group, and
other users. In addition to that, it is possible to set the set
user id, the set group id, and the
sticky bit. This lean concept is fully adequate for
most practical cases. However, for more complex scenarios or advanced
applications, system administrators formerly needed to use several
workarounds to circumvent the limitations of the traditional permission
concept.
ACLs can be used as an extension of the traditional file permission concept. They allow the assignment of permissions to individual users or groups even if these do not correspond to the original owner or the owning group. Access control lists are a feature of the Linux kernel and are currently supported by Ext2, Ext3, Ext4, JFS, and XFS. Using ACLs, complex scenarios can be realized without implementing complex permission models on the application level.
The advantages of ACLs are evident if you want to replace a Windows
server with a Linux server. Some connected workstations may
continue to run under Windows even after the migration. The Linux system
offers file and print services to the Windows clients with Samba. With
Samba supporting access control lists, user permissions can be configured
both on the Linux server and in Windows with a graphical user interface
(only Windows NT and later). With winbindd, part of
the Samba suite, it is even possible to assign permissions to users only
existing in the Windows domain without any account on the Linux server.
The conventional POSIX permission concept uses three
classes of users for assigning permissions in the
file system: the owner, the owning group, and other users. Three
permission bits can be set for each user class, giving permission to
read (r), write (w), and execute
(x).
The user and group access permissions for all kinds of file system objects (files and directories) are determined by means of ACLs.
Default ACLs can only be applied to directories. They determine the permissions a file system object inherits from its parent directory when it is created.
Each ACL consists of a set of ACL entries. An ACL entry contains a type, a qualifier for the user or group to which the entry refers, and a set of permissions. For some entry types, the qualifier for the group or users is undefined.
Table 10.1, “ACL Entry Types” summarizes the six possible types of ACL
entries, each defining permissions for a user or a group of users. The
owner entry defines the permissions of the user
owning the file or directory. The owning group entry
defines the permissions of the file's owning group. The superuser can
change the owner or owning group with chown or
chgrp, in which case the owner and owning group
entries refer to the new owner and owning group. Each named
user entry defines the permissions of the user specified in
the entry's qualifier field. Each named group entry
defines the permissions of the group specified in the entry's qualifier
field. Only the named user and named group entries have a qualifier field
that is not empty. The other entry defines the
permissions of all other users.
The mask entry further limits the permissions granted by named user, named group, and owning group entries by defining which of the permissions in those entries are effective and which are masked. If permissions exist in one of the mentioned entries and in the mask, they are effective. Permissions contained only in the mask or only in the actual entry are not effective—meaning the permissions are not granted. All permissions defined in the owner and owning group entries are always effective. The example in Table 10.2, “Masking Access Permissions” demonstrates this mechanism.
There are two basic classes of ACLs: A minimum ACL contains only the entries for the types owner, owning group, and other, which correspond to the conventional permission bits for files and directories. An extended ACL goes beyond this. It must contain a mask entry and may contain several entries of the named user and named group types.
| Type | Text Form |
|---|---|
| owner | user::rwx |
| named user | user:name:rwx |
| owning group | group::rwx |
| named group | group:name:rwx |
| mask | mask::rwx |
| other | other::rwx |
| Entry Type | Text Form | Permissions |
|---|---|---|
| named user | user:geeko:r-x | r-x |
| mask | mask::rw- | rw- |
| effective permissions: | | r-- |
Figure 10.1, “Minimum ACL: ACL Entries Compared to Permission Bits” and
Figure 10.2, “Extended ACL: ACL Entries Compared to Permission Bits” illustrate the two cases of a minimum
ACL and an extended ACL. The figures are structured in three
blocks—the left block shows the type specifications of the ACL
entries, the center block displays an example ACL, and the right block
shows the respective permission bits according to the conventional
permission concept (for example, as displayed by ls
-l). In both cases, the owner
class permissions are mapped to the ACL entry owner.
Other class permissions are mapped to the
respective ACL entry. However, the mapping of the group
class permissions is different in the two cases.
In the case of a minimum ACL—without mask—the group class permissions are mapped to the ACL entry owning group. This is shown in Figure 10.1, “Minimum ACL: ACL Entries Compared to Permission Bits”. In the case of an extended ACL—with mask—the group class permissions are mapped to the mask entry. This is shown in Figure 10.2, “Extended ACL: ACL Entries Compared to Permission Bits”.
This mapping approach ensures the smooth interaction of applications, regardless of whether they have ACL support. The access permissions that were assigned by means of the permission bits represent the upper limit for all other “fine adjustments” made with an ACL. Changes made to the permission bits are reflected by the ACL and vice versa.
With getfacl and setfacl on the
command line, you can access ACLs. The usage of these commands is
demonstrated in the following example.
Before creating the directory, use the umask command
to define which access permissions should be masked each time a file
object is created. The command umask
027 sets the default permissions by giving the owner
the full range of permissions (0), denying the group
write access (2), and giving other users no
permissions (7). umask
actually masks the corresponding permission bits or turns them off. For
details, consult the umask man page.
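A quick illustration of this umask value (file and directory names are examples):

```shell
# Sketch: the effect of umask 027 on newly created objects
umask 027
touch demo_file
stat -c %a demo_file           # 0666 masked by 027 gives 640 (rw-r-----)
mkdir demo_dir
stat -c %a demo_dir            # 0777 masked by 027 gives 750 (rwxr-x---)
```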
mkdir mydir creates the mydir
directory with the default permissions as set by
umask. Use ls -dl
mydir to check whether all permissions were assigned correctly.
The output for this example is:
drwxr-x--- ... tux project3 ... mydir
With getfacl mydir, check the
initial state of the ACL. This gives information like:
# file: mydir
# owner: tux
# group: project3
user::rwx
group::r-x
other::---
The first three output lines display the name, owner, and
owning group of the directory. The next three lines contain the three
ACL entries owner, owning group, and other. In fact, in the case of this
minimum ACL, the getfacl command does not produce any
information you could not have obtained with ls.
Modify the ACL to assign read, write, and execute permissions to an
additional user geeko and an additional group
mascots with:
root # setfacl -m user:geeko:rwx,group:mascots:rwx mydir
The option -m prompts setfacl to
modify the existing ACL. The following argument indicates the ACL
entries to modify (multiple entries are separated by commas). The final
part specifies the name of the directory to which these modifications
should be applied. Use the getfacl command to take a
look at the resulting ACL.
# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---
In addition to the entries initiated for the user
geeko and the group mascots, a
mask entry has been generated. This mask entry is set automatically so
that all permissions are effective. setfacl
automatically adapts existing mask entries to the settings modified,
unless you deactivate this feature with -n. The mask
entry defines the maximum effective access permissions for all entries
in the group class. This includes named user, named group, and owning
group. The group class permission bits displayed by
ls -dl mydir now correspond to the
mask entry.
drwxrwx---+ ... tux project3 ... mydir
The first column of the output contains an additional
+ to indicate that there is an
extended ACL for this item.
According to the output of the ls command, the
permissions for the mask entry include write access. Traditionally, such
permission bits would mean that the owning group (here
project3) also has write access to the directory
mydir.
However, the effective access permissions for the owning group
correspond to the overlapping portion of the permissions defined for the
owning group and for the mask—which is r-x
in our example (see Table 10.2, “Masking Access Permissions”). As far as the effective
permissions of the owning group in this example are concerned, nothing
has changed even after the addition of the ACL entries.
Edit the mask entry with setfacl or
chmod. For example, use chmod
g-w mydir. ls -dl
mydir then shows:
drwxr-x---+ ... tux project3 ... mydir
getfacl mydir provides the following
output:
# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx        # effective: r-x
group::r-x
group:mascots:rwx     # effective: r-x
mask::r-x
other::---
After executing chmod to remove the write
permission from the group class bits, the output of
ls is sufficient to see that the mask bits
must have changed accordingly: write permission is again limited to the
owner of mydir. The output of the
getfacl confirms this. This output includes a comment
for all those entries in which the effective permission bits do not
correspond to the original permissions, because they are filtered
according to the mask entry. The original permissions can be restored at
any time with chmod g+w mydir.
Directories can have a default ACL, which is a special kind of ACL defining the access permissions that objects in the directory inherit when they are created. A default ACL affects both subdirectories and files.
There are two ways in which the permissions of a directory's default ACL are passed to the files and subdirectories:
A subdirectory inherits the default ACL of the parent directory both as its default ACL and as an ACL.
A file inherits the default ACL as its ACL.
All system calls that create file system objects use a
mode parameter that defines the access permissions
for the newly created file system object. If the parent directory does
not have a default ACL, the permission bits as defined by the
umask are subtracted from the permissions as passed
by the mode parameter, with the result being
assigned to the new object. If a default ACL exists for the parent
directory, the permission bits assigned to the new object correspond to
the overlapping portion of the permissions of the
mode parameter and those that are defined in the
default ACL. The umask is disregarded in this case.
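The "overlapping portion" is a bitwise AND of the permission bits. A small sketch with assumed example values (0750 standing in for the permissions granted by a default ACL):

```shell
# Sketch: with a default ACL, new-object permissions are the bitwise AND of
# the mode parameter and the default-ACL permissions (values are examples)
mode=0666                      # mode passed by touch for new files
default_perms=0750             # hypothetical permissions from the default ACL
printf '%o\n' $(( mode & default_perms ))   # prints 640
```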
The following three examples show the main operations for directories and default ACLs:
Add a default ACL to the existing directory
mydir with:
tux > setfacl -d -m group:mascots:r-x mydir
The option -d of the setfacl
command prompts setfacl to perform the following
modifications (option -m) in the default ACL.
Take a closer look at the result of this command:
tux > getfacl mydir
# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---
default:user::rwx
default:group::r-x
default:group:mascots:r-x
default:mask::r-x
default:other::---
getfacl returns both the ACL and the default ACL.
The default ACL is formed by all lines that start with
default. Although you merely executed the
setfacl command with an entry for the
mascots group for the default ACL,
setfacl automatically copied all other entries
from the ACL to create a valid default ACL. Default ACLs do not have
an immediate effect on access permissions. They only come into play
when file system objects are created. These new objects inherit
permissions only from the default ACL of their parent directory.
In the next example, use mkdir to create a
subdirectory in mydir, which inherits the
default ACL.
tux > mkdir mydir/mysubdir
tux > getfacl mydir/mysubdir
# file: mydir/mysubdir
# owner: tux
# group: project3
user::rwx
group::r-x
group:mascots:r-x
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:mascots:r-x
default:mask::r-x
default:other::---
As expected, the newly-created subdirectory
mysubdir has the permissions from the default
ACL of the parent directory. The ACL of mysubdir
is an exact reflection of the default ACL of
mydir. The default ACL that this directory will
hand down to its subordinate objects is also the same.
Use touch to create a file in the
mydir directory, for example,
touch mydir/myfile.
ls -l mydir/myfile then shows:
-rw-r-----+ ... tux project3 ... mydir/myfile
The output of getfacl
mydir/myfile is:
# file: mydir/myfile
# owner: tux
# group: project3
user::rw-
group::r-x          # effective: r--
group:mascots:r-x   # effective: r--
mask::r--
other::---
touch uses a mode with the
value 0666 when creating new files, which means
that the files are created with read and write permissions for all
user classes, provided no other restrictions exist in
umask or in the default ACL (see
Section 10.4.3.1, “Effects of a Default ACL”). In effect,
this means that all access permissions not contained in the
mode value are removed from the respective ACL
entries. Although no permissions were removed from the ACL entry of
the group class, the mask entry was modified to mask permissions not
set in mode.
This approach ensures the smooth interaction of applications (such as
compilers) with ACLs. You can create files with restricted access
permissions and subsequently mark them as executable. The
mask mechanism guarantees that the right users and
groups can execute them as desired.
A check algorithm is applied before any process or application is granted access to an ACL-protected file system object. As a basic rule, the ACL entries are examined in the following sequence: owner, named user, owning group or named group, and other. The access is handled in accordance with the entry that best suits the process. Permissions do not accumulate.
Things are more complicated if a process belongs to more than one group and would potentially suit several group entries. An entry is randomly selected from the suitable entries with the required permissions. It is irrelevant which of the entries triggers the final result “access granted”. Likewise, if none of the suitable group entries contain the required permissions, a randomly selected entry triggers the final result “access denied”.
ACLs can be used to implement very complex permission scenarios that meet
the requirements of modern applications. The traditional permission
concept and ACLs can be combined in a smart manner. The basic file
commands (cp, mv,
ls, etc.) support ACLs, as do Samba and Nautilus.
Unfortunately, many editors and file managers still lack ACL support.
When copying files with Emacs, for example, the ACLs of these files are
lost.
When modifying files with an editor, the ACLs of files are sometimes
preserved and sometimes not, depending on the backup mode of the editor
used. If the editor writes the changes to the original file, the ACL is
preserved. If the editor saves the updated contents to a new file that is
subsequently renamed to the old file name, the ACLs may be lost, unless
the editor supports ACLs. Except for the star
archiver, there are currently no backup applications that preserve ACLs.
For more information about ACLs, see the man pages for
getfacl(1), acl(5), and
setfacl(1).
Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.
You can choose between the following encryption options:
It is possible to create an encrypted partition with YaST during installation or in an already installed system. For further info, see Section 11.1.1, “Creating an Encrypted Partition during Installation” and Section 11.1.2, “Creating an Encrypted Partition on a Running System”. This option can also be used for removable media, such as external hard disks, as described in Section 11.1.4, “Encrypting the Content of Removable Media”.
You can create a file-based encrypted virtual disk on your hard disk or a removable medium with YaST. The encrypted virtual disk can then be used as a regular folder for storing files or directories. For more information, refer to Section 11.1.3, “Creating an Encrypted Virtual Disk”.
To quickly encrypt one or several files, you can use the GPG tool. See Section 11.2, “Encrypting Files with GPG” for more information.
Encryption methods described in this chapter cannot protect your running system from being compromised. After the encrypted volume is successfully mounted, everybody with appropriate permissions can access it. However, encrypted media are useful in case of loss or theft of your computer, or to prevent unauthorized individuals from reading your confidential data.
Use YaST to encrypt partitions or parts of your file system during installation or in an already installed system. However, encrypting a partition in an already-installed system is more difficult, because you need to resize and change existing partitions. In such cases, it may be more convenient to create an encrypted file of a defined size, in which to store other files or parts of your file system. To encrypt an entire partition, dedicate a partition for encryption in the partition layout. The standard partitioning proposal as suggested by YaST does not include an encrypted partition by default. Add it manually in the partitioning dialog.
Make sure to memorize the password for your encrypted partitions well. Without that password, you cannot access or restore the encrypted data.
The YaST expert dialog for partitioning offers the options needed for creating an encrypted partition. To create a new encrypted partition proceed as follows:
Run the YaST Expert Partitioner with › .
Select a hard disk, click , and select a primary or an extended partition.
Select the partition size or the region to use on the disk.
Select the file system, and mount point of this partition.
Activate the check box.
After checking , a pop-up window asking for installing additional software may appear. Confirm to install all the required packages to ensure that the encrypted partition works well.
If the encrypted file system needs to be mounted only when necessary, enable in the . Otherwise, enable and enter the mount point.
Click and enter a password which is used to encrypt this partition. This password is not displayed. To prevent typing errors, you need to enter the password twice.
Complete the process by clicking . The newly-encrypted partition is now created.
During the boot process, the operating system asks for the password
before mounting any encrypted partition which is set to be auto-mounted
in /etc/fstab. Such a partition is then available
to all users when it has been mounted.
To skip mounting the encrypted partition during start-up, press Enter when prompted for the password. Then decline the offer to enter the password again. In this case, the encrypted file system is not mounted and the operating system continues booting, blocking access to your data.
To mount an encrypted partition which is not mounted during the boot process, open a file manager and click the partition entry in the pane listing common places on your file system. You will be prompted for a password and the partition will be mounted.
When you are installing your system on a machine where partitions already exist, you can also decide to encrypt an existing partition during installation. In this case follow the description in Section 11.1.2, “Creating an Encrypted Partition on a Running System” and be aware that this action destroys all data on the existing partition.
It is also possible to create encrypted partitions on a running system. However, encrypting an existing partition destroys all data on it, and requires re-sizing and restructuring of existing partitions.
On a running system, select › in the YaST control center. Click to proceed. In the , select the partition to encrypt and click . The rest of the procedure is the same as described in Section 11.1.1, “Creating an Encrypted Partition during Installation”.
Instead of encrypting an entire disk or partition, you can use YaST to set up a file-based encrypted virtual disk. It will appear as a regular file in the file system, but can be mounted and used like a regular folder. Unlike encrypted partitions, encrypted virtual disks can be created without re-partitioning the hard disk.
To set up an encrypted virtual disk, you need to create an empty file
first (this file is called loop file). In the terminal, switch to the
desired directory and run the touch
FILE command (where
FILE is the desired name, for example: secret). It is also recommended to create an empty
directory that will act as a mount point for the encrypted virtual
disk. To do this, use the mkdir
DIR command (replace
DIR with the actual path and directory name,
for example: ~/my_docs).
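The two preparation steps can be sketched as follows (run in the current directory for illustration; the text uses ~/my_docs as the mount point):

```shell
# Sketch: prepare a loop file and a mount point (names from the example above)
touch secret                   # empty loop file that will hold the encrypted virtual disk
mkdir -p my_docs               # directory that will serve as the mount point
ls -ld secret my_docs
```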
To set up an encrypted virtual disk, launch YaST, switch to the
section, and start Partitioner. Switch to
the section and press . Enter the path to the created loop file into the
field. Enable the
option, specify the desired size, and
press . In the field,
enter the path to the directory that serves as a mount point (in this
example, it is ~/my_docs). Make sure that the
option is enabled and press
. Provide the desired password and press
.
YaST treats removable media (like external hard disks or flash disks) the same as any other storage device. Virtual disks or partitions on external media can be encrypted as described above. However, you should disable mounting at boot time, because removable media is usually connected only when the system is up and running.
If you encrypted your removable device with YaST, the GNOME desktop
automatically recognizes the encrypted partition and prompts for the
password when the device is detected. If you plug in a FAT-formatted
removable device when running GNOME, the desktop user entering the
password automatically becomes the owner of the device.
For devices with a file system other than FAT, change the
ownership explicitly for users other than root to give them
read-write access to the device.
If you have created a virtual disk as described in Section 11.1.3, “Creating an Encrypted Virtual Disk” but with the loop file on a removable disk, then you need to mount the file manually as follows:
tux > sudo cryptsetup luksOpen FILE NAME
tux > sudo mount /dev/mapper/NAME DIR
In the commands above, the FILE refers to the path to the loop file, NAME is a user-defined name, and DIR is the path to the mount point. For example:
tux > sudo cryptsetup luksOpen /run/media/tux/usbstick/secret my_secret
tux > sudo mount /dev/mapper/my_secret /home/tux/my_docs
The GPG encryption software can be used to encrypt individual files and documents.
To encrypt a file with GPG, you need to generate a key pair first. To do
this, run the gpg --gen-key command and follow the on-screen
instructions. When generating the key pair, GPG creates a user ID (UID) to
identify the key based on your real name, comments, and email address. You
need this UID (or just a part of it like your first name or email address)
to specify the key you want to use to encrypt a file. To find the UID of an
existing key, use the gpg --list-keys command. To encrypt
a file use the following command:
tux > gpg -e -r UID FILE

Replace UID with part of the UID (for example, your first name) and FILE with the file you want to encrypt. For example:
tux > gpg -e -r Tux secret.txt
This command creates an encrypted version of the specified file
recognizable by the .gpg file extension (in
this example, it is secret.txt.gpg).
To decrypt an encrypted file, use the following command:
tux > gpg -d -o DECRYPTED_FILE ENCRYPTED_FILE

Replace DECRYPTED_FILE with the desired name for the decrypted file and ENCRYPTED_FILE with the encrypted file you want to decrypt.
Keep in mind that an encrypted file can only be decrypted with the private key that corresponds to the public key used for encryption. If you want to share an encrypted file with another person, you have to encrypt it with that person's public key.
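The whole round trip can be scripted. The following sketch uses a throwaway GnuPG home directory and an unprotected test key; it assumes GnuPG 2.1 or later, and all names and addresses are examples only:

```shell
# Sketch: GPG encrypt/decrypt round trip with a throwaway test key (GnuPG >= 2.1)
export GNUPGHOME="$(mktemp -d)"                      # isolated keyring for the demo
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key "Tux <tux@example.org>" default default
echo "top secret" > secret.txt
gpg --batch -e -r tux@example.org secret.txt         # creates secret.txt.gpg
gpg --batch -d -o decrypted.txt secret.txt.gpg
cat decrypted.txt
```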
Certificates play an important role in the authentication of companies and individuals. Usually certificates are administered by the application itself. In some cases, it makes sense to share certificates between applications. The certificate store is a common ground for Firefox, Evolution, and NetworkManager. This chapter explains some details.
The certificate store is a common database for Firefox, Evolution, and NetworkManager at the moment. Other applications that use certificates are not covered but may be in the future. If you have such an application, you can continue to use its private, separate configuration.
The configuration is mostly done in the background. To activate it, proceed as follows:
Decide if you want to activate the certificate store globally (for every user on your system) or specifically to a certain user:
For every user.
Use the file /etc/profile.local
For a specific user.
Use the file ~/.bashrc
Open the file from the previous step and insert the following line:
export NSS_USE_SHARED_DB=1
Save the file.
Log out of and log in to your desktop.
All the certificates are stored under
$HOME/.local/var/pki/nssdb/.
To import a certificate into the certificate store, do the following:
Start Firefox.
Open the dialog from › . Change to › and click .
Import your certificate depending on your type: use to import server certificate, to identify other, and to identify yourself.
Securing your systems is a mandatory task for any mission-critical
system administrator. Because it is impossible to always guarantee that
the system is not compromised, it is very important to do extra checks
regularly (for example with
cron) to ensure that the system
is still under your control. This is where AIDE, the
Advanced Intrusion Detection Environment, comes
into play.
An easy check that often can reveal unwanted changes can be done by means
of RPM. The package manager has a built-in verify function that checks
all the managed files in the system for changes. To verify all files,
run the command rpm -Va. However, this command will
also display changes in configuration files and you will need to do some
filtering to detect important changes.
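One possible filter is to drop the lines whose attribute column marks a configuration file (c). This sketch runs against a captured sample instead of live rpm output, and the file names are invented:

```shell
# Sketch: hide config-file changes in rpm -Va output; the " c " column marks
# configuration files (sample output is hard-coded so the filter can be shown)
sample='S.5....T.  c /etc/ssh/sshd_config
S.5....T.    /usr/bin/somebinary'
filtered="$(printf '%s\n' "$sample" | grep -v ' c ')"
printf '%s\n' "$filtered"      # only the non-config line remains
```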
An additional problem with the RPM method is that an intelligent attacker will modify rpm itself to hide any changes that might have been made by some kind of rootkit, which allows the attacker to mask the intrusion and gain root privileges. To solve this, you should implement a secondary check that can also be run completely independently of the installed system.
Before you install your system, verify the checksum of your medium (see Section 4.1, “Checking Media”) to make sure you do not use a compromised source. After you have installed the system, initialize the AIDE database. To make sure that all went well during and after the installation, do an installation directly on the console, without any network attached to the computer. Do not leave the computer unattended or connected to any network before AIDE creates its database.
AIDE is not installed by default on openSUSE Leap. To install it,
either use › , or enter zypper install
aide on the command line as root.
To tell AIDE which attributes of which files should be checked, use
the /etc/aide.conf configuration file. It must be
modified to become the actual configuration. The first section handles
general parameters like the location of the AIDE database file. More
relevant for local configurations are the Custom
Rules and the Directories and Files
sections. A typical rule looks like the following:
Binlib = p+i+n+u+g+s+b+m+c+md5+sha1
After defining the variable Binlib, the respective checks are used in the files section. Important options include the following:
| Option | Description |
|---|---|
| p | Check the file permissions of the selected files or directories. |
| i | Check the inode number. Every file name has a unique inode number that should not change. |
| n | Check the number of links pointing to the relevant file. |
| u | Check if the owner of the file has changed. |
| g | Check if the group of the file has changed. |
| s | Check if the file size has changed. |
| b | Check if the block count used by the file has changed. |
| m | Check if the modification time of the file has changed. |
| c | Check if the inode change time (ctime) of the file has changed. |
| md5 | Check if the MD5 checksum of the file has changed. |
| sha1 | Check if the SHA-1 (160-bit) checksum of the file has changed. |
This is a configuration that checks for all files in
/sbin with the options defined in
Binlib but omits the
/sbin/conf.d/ directory:
/sbin  Binlib
!/sbin/conf.d
To create the AIDE database, proceed as follows:
Open /etc/aide.conf.
Define which files should be checked and which checks to apply. For a complete list of available checks, see
/usr/share/doc/packages/aide/manual.html. The
definition of the file selection needs some knowledge about regular
expressions. Save your modifications.
To check whether the configuration file is valid, run:
root # aide --config-check

Any output of this command is a hint that the configuration is not valid. For example, if you get the following output:
root # aide --config-check
35:syntax error:!
35:Error while reading configuration:!
Configuration error
The error is to be expected in line 36 of
/etc/aide.conf. Note that the error message
contains the last successfully read line of the configuration file.
Initialize the AIDE database. Run the command:
root # aide -i
Copy the generated database to a safe location like a CD-R or DVD-R, a remote server, or a flash disk for later use.
This step is essential as it avoids compromising your database. It is recommended to use a medium that can be written only once to prevent the database from being modified. Never leave the database on the computer you want to monitor.
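Since the database must not change behind your back, it also helps to record a checksum of it before copying it off the machine and to verify the copy later. A minimal sketch, using a temporary stand-in file instead of the real /var/lib/aide/aide.db.new:

```shell
# Record and later verify a checksum of the database copy.
# The stand-in file below takes the place of /var/lib/aide/aide.db.new.
db=$(mktemp)
printf 'demo database contents\n' > "$db"
sha256sum "$db" > "$db.sha256"   # store this file alongside the offline copy
sha256sum -c "$db.sha256"        # prints "<file>: OK" if the copy is intact
```

In practice, keep the checksum file on the same write-once medium as the database itself.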
To perform a file system check, proceed as follows:
Rename the database:
root # mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
After any configuration change, you always need to re-initialize the AIDE database and subsequently move the newly generated database. It is also a good idea to make a backup of this database. See Section 13.2, “Setting Up an AIDE Database” for more information.
Perform the check with the following command:
root # aide --check
If the output is empty, everything is fine. If AIDE found changes, it displays a summary of changes, for example:
root # aide --check
AIDE found differences between database and filesystem!!
Summary:
Total number of files: 1992
Added files: 0
Removed files: 0
Changed files: 1
To learn about the actual changes, increase the verbose level of the
check with the parameter -V. For the previous example,
this could look like the following:
root # aide --check -V
AIDE found differences between database and filesystem!!
Start timestamp: 2009-02-18 15:14:10
Summary:
Total number of files: 1992
Added files: 0
Removed files: 0
Changed files: 1
---------------------------------------------------
Changed files:
---------------------------------------------------
changed: /etc/passwd
--------------------------------------------------
Detailed information about changes:
---------------------------------------------------
File: /etc/passwd
Mtime : 2009-02-18 15:11:02 , 2009-02-18 15:11:47
Ctime : 2009-02-18 15:11:02 , 2009-02-18 15:11:47
In this example, the file /etc/passwd was touched to
demonstrate the effect.
To avoid risk, it is advisable to also run the AIDE binary itself from a trusted source. This eliminates the risk that an attacker has also modified the aide binary to hide their traces.
To accomplish this task, AIDE must be run from a rescue system that is independent of the installed system. With openSUSE Leap it is relatively easy to extend the rescue system with arbitrary programs, and thus add the needed functionality.
Before you can start using the rescue system, you need to provide two packages to the system. These are included with the same syntax as you would add a driver update disk to the system. For a detailed description about the possibilities of linuxrc that are used for this purpose, see http://en.opensuse.org/SDB:Linuxrc. In the following, one possible way to accomplish this task is discussed.
Provide an FTP server as a second machine.
Copy the packages aide and
mhash to the FTP server directory, in our case
/srv/ftp/. Replace the placeholders
ARCH and VERSION
with the corresponding values:
root # cp DVD1/suse/ARCH/aideVERSION.ARCH.rpm /srv/ftp
root # cp DVD1/suse/ARCH/mhashVERSION.ARCH.rpm /srv/ftp
Create an info file /srv/ftp/info.txt that
provides the needed boot parameters for the rescue system:
dud:ftp://ftp.example.com/aideVERSION.ARCH.rpm
dud:ftp://ftp.example.com/mhashVERSION.ARCH.rpm
Replace your FTP domain name, VERSION and ARCH with the values used on your system.
Restart the server that needs to go through an AIDE check with the Rescue system from your DVD. Add the following string to the boot parameters:
info=ftp://ftp.example.com/info.txt
This parameter tells linuxrc to also read in all
information from the info.txt file.
After the rescue system has booted, the AIDE program is ready for use.
Information about AIDE is available at the following places:
The home page of AIDE: http://aide.sourceforge.net
In the documented template configuration
/etc/aide.conf.
In several files below
/usr/share/doc/packages/aide after installing the
aide package.
On the AIDE user mailing list at https://mailman.cs.tut.fi/mailman/listinfo/aide.
In networked environments, it is often necessary to access hosts from a
remote location. If a user sends login and password strings for
authentication purposes as plain text, they could be intercepted and
misused to gain access to that user account. This would expose all the user's files to an attacker,
and the compromised account could be used to obtain administrator or
root access, or to penetrate
other systems. In the past, remote connections were established with
telnet, rsh or
rlogin, which offered no guards against eavesdropping
in the form of encryption or other security mechanisms. There are other
unprotected communication channels, like the traditional FTP protocol
and some remote copying programs like rcp.
The SSH suite provides the necessary protection by encrypting the authentication strings (usually a login name and a password) and all the other data exchanged between the hosts. With SSH, the data flow could still be recorded by a third party, but the contents are encrypted and cannot be reverted to plain text unless the encryption key is known. So SSH enables secure communication over insecure networks, such as the Internet. The SSH implementation coming with openSUSE Leap is OpenSSH.
openSUSE Leap installs the OpenSSH package by default, providing the
commands ssh, scp, and
sftp. In the default configuration, remote access to an
openSUSE Leap system is only possible with the OpenSSH utilities, and
only if the sshd daemon is running and
the firewall permits access.
SSH on openSUSE Leap uses cryptographic hardware acceleration if available. As a result, the transfer of large quantities of data through an SSH connection is considerably faster than without cryptographic hardware. As an additional benefit, the CPU will see a significant reduction in load.
ssh—Secure Shell #
With ssh it is possible to log in to remote
systems and to work interactively. To log in to the host
sun as user tux enter one of
the following commands:
tux > ssh tux@sun
tux > ssh -l tux sun
If the user name is the same on both machines, you can omit it. Using
ssh sun is sufficient. The remote host
prompts for the remote user's password. After a successful
authentication, you can work on the remote command line or use
interactive applications, such as YaST in text mode.
Furthermore, ssh offers the possibility to run
non-interactive commands on remote systems using ssh
HOST COMMAND.
COMMAND needs to be properly quoted. Multiple
commands can be concatenated as on a local shell.
tux > ssh root@sun "dmesg -T | tail -n 25"
tux > ssh root@sun "cat /etc/issue && uptime"
SSH also simplifies the use of remote X applications. If you run
ssh with the -X option, the
DISPLAY variable is automatically set on the remote
machine and all X output is exported to the local machine over the
existing SSH connection. At the same time, X applications started
remotely cannot be intercepted by unauthorized individuals.
By adding the -A option, the ssh-agent authentication
mechanism is carried over to the next machine. This way, you can work
from different machines without having to enter a password, but only if
you have distributed your public key to the destination hosts and
properly saved it there. Refer to
Section 14.5.2, “Copying an SSH Key” for details.
This mechanism is deactivated in the default settings, but can be
permanently activated at any time in the systemwide configuration file
/etc/ssh/sshd_config by setting
AllowAgentForwarding yes.
scp—Secure Copy #
scp copies files to or from a remote machine. If
the user name on jupiter is different than the user name on
sun, specify the latter using the
USER_NAME@host format. If
the file should be copied into a directory other than the remote
user's home directory, specify it as
sun:DIRECTORY. The following
examples show how to copy a file from a local to a remote machine and
vice versa.
tux > scp ~/MyLetter.tex tux@sun:/tmp
tux > scp tux@sun:/tmp/MyLetter.tex ~
-l Option
With the ssh command, the option
-l can be used to specify a remote user (as an
alternative to the
USER_NAME@host
format). With scp the option -l
is used to limit the bandwidth consumed by scp.
After the correct password is entered, scp starts the
data transfer. It displays a progress bar and the time remaining for each
file that is copied. Suppress all output with the -q
option.
scp also provides a recursive copying feature for
entire directories. The command
tux > scp -r src/ sun:backup/
copies the entire contents of the directory src
including all subdirectories to the ~/backup
directory on the host sun. If this subdirectory does not
exist, it is created automatically.
The -p option tells scp to leave the
time stamp of files unchanged. -C compresses the data
transfer. This minimizes the data volume to transfer, but creates a
heavier burden on the processors of both machines.
sftp—Secure File Transfer #sftp #
If you want to copy several files from or to different locations,
sftp is a convenient alternative to
scp. It opens a shell with a set of commands similar
to a regular FTP shell. Type help at the sftp-prompt
to get a list of available commands. More details are available from the
sftp man page.
tux > sftp sun
Enter passphrase for key '/home/tux/.ssh/id_rsa':
Connected to sun.
sftp> help
Available commands:
bye Quit sftp
cd path Change remote directory to 'path'
[...]
As with a regular FTP server, a user can not only download
files from, but also upload files to, a remote machine running an
SFTP server by using the put command. By default, the
files will be uploaded to the remote host with the same
permissions as on the local host. There are two options to
automatically alter these permissions:
A umask works as a filter against the permissions of the original file on the local host. It can only withdraw permissions:
| original permissions | umask | uploaded permissions |
|---|---|---|
| 0666 | 0002 | 0664 |
| 0600 | 0002 | 0600 |
| 0775 | 0025 | 0750 |
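The filtering shown in the table can be reproduced locally; a short sketch assuming GNU coreutils stat syntax (available on openSUSE Leap):

```shell
# A umask can only withdraw permission bits: a file created with the
# requested mode 0666 under umask 0002 ends up with mode 0664, as in
# the first row of the table above.
dir=$(mktemp -d)
( umask 0002; touch "$dir/report" )   # touch requests mode 0666 for new files
stat -c %a "$dir/report"              # prints 664
```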
To apply a umask on an SFTP server, edit the file
/etc/ssh/sshd_config. Search for the line
beginning with Subsystem sftp and add the
-u parameter with the desired setting, for example:
Subsystem sftp /usr/lib/ssh/sftp-server -u 0002
Explicitly setting the permissions sets the same permissions for all
files uploaded via SFTP. Specify a three-digit pattern such as
600, 644, or
755 with -u. When both
-m and -u are specified,
-u is ignored.
To apply explicit permissions for uploaded files on an SFTP server,
edit the file /etc/ssh/sshd_config.
Search for the line beginning with Subsystem sftp
and add the -m parameter with the desired setting,
for example:
Subsystem sftp /usr/lib/ssh/sftp-server -m 600
The SSH Daemon (sshd) #
To work with the SSH client programs ssh and
scp, a server (the SSH daemon) must be running in the
background, listening for connections on TCP/IP port
22. The daemon generates three key pairs when starting for the
first time. Each key pair consists of a private and a public key.
Therefore, this procedure is called public key-based. To
guarantee the security of the communication via SSH, access to the
private key files must be restricted to the system administrator. The
file permissions are set accordingly by the default installation. The
private keys are only required locally by the SSH daemon and must not be
given to anyone else. The public key components (recognizable by the name
extension .pub) are sent to the client requesting
the connection. They are readable for all users.
A connection is initiated by the SSH client. The waiting SSH daemon and the requesting SSH client exchange identification data to compare the protocol and software versions, and to prevent connections through the wrong port. Because a child process of the original SSH daemon replies to the request, several SSH connections can be made simultaneously.
For the communication between SSH server and SSH client, OpenSSH supports
versions 1 and 2 of the SSH protocol. Version 2 of the
SSH protocol is used by default. Override this to use version 1
of the protocol with the -1 option.
When using version 1 of SSH, the server sends its public host key and a server key, which is regenerated by the SSH daemon every hour. Both allow the SSH client to encrypt a freely chosen session key, which is sent to the SSH server. The SSH client also tells the server which encryption method (cipher) to use. Version 2 of the SSH protocol does not require a server key. Both sides use an algorithm according to Diffie-Hellman to exchange their keys.
The private host and server keys are absolutely required to decrypt the
session key and cannot be derived from the public parts. Only the
contacted SSH daemon can decrypt the session key using its private keys.
This initial connection phase can be watched closely by turning on
verbose debugging using the -v option of the SSH client.
To watch the log entries from the sshd use the following command:
tux > sudo journalctl -u sshd
It is recommended to back up the private and public keys stored in
/etc/ssh/ in a secure, external location. In this
way, key modifications can be detected or the old ones can be used again
after having installed a new system.
If you install openSUSE Leap on a machine with existing Linux installations, the installation routine automatically imports the SSH host key with the most recent access time from an existing installation.
When establishing a secure connection with a remote host for the first
time, the client stores all public host keys in
~/.ssh/known_hosts. This prevents any
man-in-the-middle attacks—attempts by foreign SSH servers to use
spoofed names and IP addresses. Such attacks are detected either by a
host key that is not included in ~/.ssh/known_hosts,
or by the server's inability to decrypt the session key in the absence of
an appropriate private counterpart.
If the public keys of a host have changed (that needs to be verified
before connecting to such a server), the offending keys can be
removed with ssh-keygen -r
HOSTNAME.
As of version 6.8, OpenSSH comes with a protocol extension that supports host key rotation. It makes sense to replace keys if you are still using weak keys such as 1024-bit RSA keys. It is strongly recommended to replace such a key with a 2048-bit RSA key or something even better. The client will then use the “best” host key.
After installing new host keys on the server, restart sshd.
This protocol extension can
inform a client of all the new host keys on the server, if the user
initiates a connection with ssh. Then, the
software on the client updates
~/.ssh/known_hosts, and the user is not
required to accept new keys of previously known and trusted hosts
manually. The local known_hosts file will
contain all the host keys of the remote hosts, in addition to the
one that authenticated the host during this session.
Once the administrator of the server knows that all the clients have
fetched the new keys, they can remove the old keys. The protocol
extension ensures that the obsolete keys will be removed from the
client's configuration, too. The key removal occurs while initiating
an ssh session.
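On the client side, this behavior is controlled by the UpdateHostKeys option of the OpenSSH client (available since OpenSSH 6.8). A sketch for ~/.ssh/config:

```
# ~/.ssh/config
Host *
    # Ask before adopting host keys announced via the rotation extension;
    # set to "yes" to accept and store them silently.
    UpdateHostKeys ask
```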
For more information, see:
http://blog.djm.net.au/2015/02/key-rotation-in-openssh-68.html
http://heise.de/-2540907 (“Endlich neue Schlüssel für SSH-Server”, German only)
In its simplest form, authentication is done by entering the user's
password just as if logging in locally. However, having to memorize
passwords of several users on remote machines is inefficient. What is
more, these passwords may change. On the other hand—when
granting root access—an administrator needs to be able
to quickly revoke such a permission without having to change the
root password.
To accomplish a login that does not require to enter the remote
user's password, SSH uses another key pair, which needs to be generated
by the user. It consists of a public (id_rsa.pub or
id_dsa.pub) and a private key
(id_rsa or id_dsa).
To be able to log in without having to specify the remote user's
password, the public key of the “SSH user” must be
in ~/.ssh/authorized_keys. This approach also
ensures that the remote user has got full control: adding the key
requires the remote user's password and removing the key revokes the
permission to log in from remote.
For maximum security such a key should be protected by a passphrase which
needs to be entered every time you use ssh,
scp, or sftp. Contrary to the
simple authentication, this passphrase is independent from the remote
user and therefore always the same.
As an alternative to the key-based authentication described above, SSH also offers host-based authentication. With host-based authentication, users on a trusted host can log in to another host on which this feature is enabled, using the same user name. openSUSE Leap is set up for key-based authentication; setting up host-based authentication on openSUSE Leap is beyond the scope of this manual.
If the host-based authentication is to be used, the file
/usr/lib/ssh/ssh-keysign (32-bit systems) or
/usr/lib64/ssh/ssh-keysign (64-bit systems) should
have the setuid bit set, which is not the default setting in
openSUSE Leap. In such a case, set the file permissions manually. You
should use /etc/permissions.local for this purpose,
to make sure that the setuid bit is preserved after security updates of
openssh.
To generate a key with default parameters (RSA, 2048 bits), enter the
command ssh-keygen.
Accept the default location to store the key
(~/.ssh/id_rsa) by pressing
Enter (strongly recommended) or enter an
alternative location.
Enter a passphrase consisting of 10 to 30 characters. The same rules as for creating safe passwords apply. It is strongly advised not to leave the passphrase empty.
You should make absolutely sure that the private key is not accessible
by anyone other than yourself (always set its permissions to
0600). The private key must never fall into the hands
of another person.
To change the password of an existing key pair, use the command
ssh-keygen -p.
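For scripted setups, the whole key generation can also be done non-interactively. The sketch below deliberately uses an empty passphrase (-N '') for demonstration purposes only, against the advice above:

```shell
# Generate an RSA key pair without any prompts.
# Demo only: -N '' sets an EMPTY passphrase, which is discouraged above.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -f "$keydir/id_rsa"
ls -l "$keydir"   # id_rsa (private key, mode 0600) and id_rsa.pub
```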
To copy a public SSH key to ~/.ssh/authorized_keys
of a user on a remote machine, use the command
ssh-copy-id. To copy your personal key
stored under ~/.ssh/id_rsa.pub you may use the
short form. To copy DSA keys or keys of other users, you need
to specify the path:
tux > ssh-copy-id -i tux@sun
tux > ssh-copy-id -i ~/.ssh/id_dsa.pub tux@sun
tux > ssh-copy-id -i ~notme/.ssh/id_rsa.pub tux@sun
To successfully copy the key, you need to enter the remote
user's password. To remove an existing key, manually edit
~/.ssh/authorized_keys.
ssh-agent #
When doing lots of secure shell operations it is cumbersome to type the
SSH passphrase for each such operation. Therefore, the SSH package
provides another tool, ssh-agent, which retains the
private keys for the duration of an X or terminal session. All other
windows or programs are started as clients to the
ssh-agent. By starting the agent, a set of
environment variables is set, which will be used by
ssh, scp, or
sftp to locate the agent for automatic login. See
the ssh-agent man page for details.
After the ssh-agent is started, you need to add your
keys by using ssh-add. It will prompt for the
passphrase. After the password has been provided once, you can use the
secure shell commands within the running session without having to
authenticate again.
ssh-agent in an X Session #
On openSUSE Leap, the ssh-agent is automatically
started by the GNOME display manager. To also invoke
ssh-add to add your keys to the agent at the
beginning of an X session, do the following:
Log in as the desired user and check whether the file
~/.xinitrc exists.
If it does not exist, use an existing template or copy it from
/etc/skel:
if [ -f ~/.xinitrc.template ]; then mv ~/.xinitrc.template ~/.xinitrc; \
else cp /etc/skel/.xinitrc.template ~/.xinitrc; fi
If you have copied the template, search for the following lines and
uncomment them. If ~/.xinitrc already existed,
add the following lines (without comment signs).
# if test -S "$SSH_AUTH_SOCK" -a -x "$SSH_ASKPASS"; then
#   ssh-add < /dev/null
# fi
When starting a new X session, you will be prompted for your SSH passphrase.
ssh-agent in a Terminal Session #
In a terminal session you need to manually start the
ssh-agent and then call ssh-add
afterward. There are two ways to start the agent. The first example
given below starts a new Bash shell on top of your existing shell. The
second example starts the agent in the existing shell and modifies the
environment as needed.
tux > ssh-agent -s /bin/bash
eval $(ssh-agent)
After the agent has been started, run ssh-add to
provide the agent with your keys.
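The life cycle of an agent in a terminal session can be sketched as follows (ssh-add is left as a comment because it prompts for the passphrase):

```shell
# Start an agent in the current shell, verify its socket, and stop it again.
eval "$(ssh-agent -s)" > /dev/null   # exports SSH_AUTH_SOCK and SSH_AGENT_PID
[ -S "$SSH_AUTH_SOCK" ] && echo "agent ready"
# ssh-add                            # would prompt for the key's passphrase
ssh-agent -k > /dev/null             # terminate the agent when finished
```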
ssh can also be used to redirect TCP/IP connections.
This feature, also called SSH tunneling, redirects TCP
connections to a certain port to another machine via an encrypted
channel.
With the following command, any connection directed to jupiter port 25 (SMTP) is redirected to the SMTP port on sun. This is especially useful for those using SMTP servers without SMTP-AUTH or POP-before-SMTP features. From any arbitrary location connected to a network, e-mail can be transferred to the “home” mail server for delivery.
root # ssh -L 25:sun:25 jupiter
Similarly, all POP3 requests (port 110) on jupiter can be forwarded to the POP3 port of sun with this command:
root # ssh -L 110:sun:110 jupiter
Both commands must be executed as root, because the connection
is made to privileged local ports. E-mail is sent and retrieved by
normal users in an existing SSH connection. The SMTP and POP3 host
must be set to localhost for this to
work. Additional information can be found in the manual pages for
each of the programs described above and in the OpenSSH package
documentation under
/usr/share/doc/packages/openssh.
The home page of OpenSSH
The OpenSSH Wikibook
man sshd
The man page of the OpenSSH daemon
man ssh_config
The man page of the OpenSSH SSH client configuration files
man scp, man sftp, man slogin, man ssh, man ssh-add, man ssh-agent, man ssh-copy-id, man ssh-keyconvert, man ssh-keygen, man ssh-keyscan
Man pages of several binary files to securely copy files
(scp, sftp), to log in
(slogin, ssh), and to manage
keys.
/usr/share/doc/packages/openssh/README.SUSE,
/usr/share/doc/packages/openssh/README.FIPS
SUSE package-specific documentation; changes in defaults with respect to upstream, notes on FIPS mode, etc.
Whenever Linux is used in a network environment, you can use the
kernel functions that allow the manipulation of network packets to
maintain a separation between internal and external network areas. The
Linux netfilter framework provides the means
to establish an effective firewall that keeps different networks
apart. Using iptables—a generic table structure for the
definition of rule sets—precisely controls the packets allowed to
pass a network interface. Such a packet filter can be set up using
firewalld and its graphical interface firewall-config.
This section discusses the low-level details of packet filtering. The
components netfilter and
iptables are responsible for the filtering and
manipulation of network packets and for network address translation (NAT).
The filtering criteria and any actions associated with them are stored in
chains, which must be matched one after another by individual network
packets as they arrive. The chains to match are stored in tables. The
iptables command allows you to alter these tables and
rule sets.
The Linux kernel maintains three tables, each for a particular category of functions of the packet filter:
This table holds the bulk of the filter rules, because it implements
the packet filtering mechanism in the stricter
sense, which determines whether packets are let through
(ACCEPT) or discarded (DROP),
for example.
This table defines any changes to the source and target addresses of packets. Using these functions also allows you to implement masquerading, which is a special case of NAT used to link a private network with the Internet.
The rules held in this table make it possible to manipulate values stored in IP headers (such as the type of service).
These tables contain several predefined chains to match packets:
This chain is applied to all incoming packets.
This chain is applied to packets destined for the system's internal processes.
This chain is applied to packets that are only routed through the system.
This chain is applied to packets originating from the system itself.
This chain is applied to all outgoing packets.
Figure 15.1, “iptables: A Packet's Possible Paths” illustrates the paths along which a network packet may travel on a given system. For the sake of simplicity, the figure lists tables as parts of chains, but in reality these chains are held within the tables themselves.
In the simplest case, an incoming packet destined for the system itself
arrives at the eth0 interface. The packet is first
referred to the PREROUTING chain of the
mangle table then to the PREROUTING
chain of the nat table. The following step, concerning
the routing of the packet, determines that the actual target of the
packet is a process of the system itself. After passing the
INPUT chains of the mangle and the
filter table, the packet finally reaches its target,
provided that the rules of the filter table allow this.
Masquerading is the Linux-specific form of NAT (network address
translation) and can be used to connect a small LAN with the
Internet. LAN hosts use IP
addresses from the private range (see
Section 13.1.2, “Netmasks and Routing”) and on the Internet
official IP addresses are used. To be able to connect
to the Internet, a LAN host's private address is translated to an official
one. This is done on the router, which acts as the gateway between the
LAN and the Internet. The underlying principle is a simple one: The
router has more than one network interface, typically a network card and
a separate interface connecting with the Internet. While the latter links
the router with the outside world, one or several others link it with the
LAN hosts. With these hosts in the local network connected to the network
card (such as eth0) of the router, they can send any
packets not destined for the local network to their default gateway or
router.
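With plain iptables, the masquerading described above boils down to a single rule in the POSTROUTING chain of the nat table, plus enabled forwarding. A sketch, in which the interface names are examples (on openSUSE Leap, firewalld normally manages such rules for you):

```
# Enable forwarding and masquerade LAN traffic leaving via the external interface
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE   # eth1: Internet-facing
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT          # eth0: LAN side
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
```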
When configuring your network, make sure both the broadcast address and the netmask are the same for all local hosts. Failing to do so prevents packets from being routed properly.
As mentioned, whenever one of the LAN hosts sends a packet destined for
an Internet address, it goes to the default router. However, the router
must be configured before it can forward such packets. For security
reasons, this is not enabled in a default installation. To enable it, add
the line net.ipv4.ip_forward = 1 in the file
/etc/sysctl.conf. Alternatively, do this via YaST,
for example by calling yast routing ip-forwarding on.
The target host of the connection can see your router, but knows nothing about the host in your internal network where the packets originated. This is why the technique is called masquerading. Because of the address translation, the router is the first destination of any reply packets. The router must identify these incoming packets and translate their target addresses, so packets can be forwarded to the correct host in the local network.
With the routing of inbound traffic depending on the masquerading table, there is no way to open a connection to an internal host from the outside. For such a connection, there would be no entry in the table. In addition, any connection already established has a status entry assigned to it in the table, so the entry cannot be used by another connection.
As a consequence of all this, you might experience some problems with several application protocols, such as ICQ, cucme, IRC (DCC, CTCP), and FTP (in PORT mode). Web browsers, the standard FTP program, and many other programs use the PASV mode. This passive mode is much less problematic as far as packet filtering and masquerading are concerned.
Firewall is probably the term most widely used to describe a mechanism that controls the data flow between networks. Strictly speaking, the mechanism described in this section is called a packet filter. A packet filter regulates the data flow according to certain criteria, such as protocols, ports, and IP addresses. This allows you to block packets that, according to their addresses, are not supposed to reach your network. To allow public access to your Web server, for example, explicitly open the corresponding port. However, a packet filter does not scan the contents of packets with legitimate addresses, such as those directed to your Web server. For example, if incoming packets were intended to compromise a CGI program on your Web server, the packet filter would still let them through.
A more effective but more complex mechanism is the combination of several types of systems, such as a packet filter interacting with an application gateway or proxy. In this case, the packet filter rejects any packets destined for disabled ports. Only packets directed to the application gateway are accepted. This gateway or proxy pretends to be the actual client of the server. In a sense, such a proxy could be considered a masquerading host on the protocol level used by the application. One example for such a proxy is Squid, an HTTP and FTP proxy server. To use Squid, the browser must be configured to communicate via the proxy. Any HTTP pages or FTP files requested are served from the proxy cache and objects not found in the cache are fetched from the Internet by the proxy.
The following section focuses on the packet filter that comes with openSUSE Leap. For further information about packet filtering and firewalling, read the Firewall HOWTO.
firewalld #
firewalld is a daemon that maintains the system's
iptables rules and offers a D-Bus interface for
operating on them. It comes with a command line utility
firewall-cmd and a graphical user interface
firewall-config for interacting with it. Since
firewalld is running in the background and provides a well-defined
interface, it allows other applications to request changes to the iptables
rules, for example to set up virtual machine networking.
firewalld implements different security zones. A number of predefined
zones like internal or public exist.
The administrator can define additional custom zones if desired. Each
zone contains its own set of iptables rules. Each network interface is a
member of exactly one zone. Individual connections can also be assigned to
a zone based on their source addresses.
Each zone represents a certain level of trust. For example, the
public zone is not trusted, because other computers in
this network are not under your control (suitable for Internet or wireless
hotspot connections). On the other hand, the internal
zone is used for networks that are under your control, like a home or
company network. By utilizing zones this way, a host can offer different
kinds of services to trusted and untrusted networks in a defined
way.
For more information about the predefined zones and their meaning in
firewalld refer to its manual at
http://www.firewalld.org/documentation/zone/predefined-zones.html.
The initial state for network interfaces is to be assigned to no zone at
all. In this case, the network interface is implicitly handled in the
default zone, which can be determined by calling firewall-cmd
--get-default-zone. If not configured otherwise, the default
zone is the public zone.
The firewalld packet filtering model allows any outgoing connections to
pass. Outgoing connections are connections that are actively established by
the local host. Incoming connections that are established by remote hosts are
blocked, if the respective service is not allowed in the zone in
question. Therefore, each of the interfaces with incoming traffic must be
placed into a suitable zone to allow for the desired services to be
accessible. For each of the zones, define the services or protocols you
need.
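The complete configuration of a zone, including the services and ports currently allowed in it, can be inspected with firewall-cmd; the zone name below is just an example:

```shell
# Show interfaces, sources, services, ports, masquerading state and
# rich rules configured for one zone
firewall-cmd --zone=public --list-all

# Show the same overview for all zones at once
firewall-cmd --list-all-zones
```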
An important concept of firewalld is the distinction between two separate
configurations, the runtime and the
permanent configuration. The runtime configuration
represents the currently active rules, while the permanent configuration
represents the saved rules that will be applied when restarting
firewalld. This makes it possible to add temporary rules that are discarded
after a restart of firewalld, or to experiment with new rules while being
able to revert to the original state. When changing the configuration,
you need to be aware of which configuration you are editing. How this is
done is discussed in Section 15.4.1.2, “Runtime versus Permanent Configuration”.
If you want to perform the firewalld configuration using the graphical user
interface firewall-config, refer to its documentation
at
http://www.firewalld.org/documentation/utilities/firewall-config.html.
In the following section we will be looking at how to perform typical
firewalld configuration tasks using firewall-cmd on the
command line.
firewalld will be installed and enabled by default. It is a regular
systemd service that can be configured via systemctl
or the YaST Services Manager.
After the installation, YaST automatically starts
firewalld and leaves all interfaces in the default
public zone. If a server application is configured
and activated on the system, YaST can adjust the firewall
rules via the options or
in the server configuration modules. Some server module dialogs include a
button for activating additional
services and ports.
By default, all firewall-cmd commands operate on the
runtime configuration. You can apply most operations to the permanent
configuration instead by adding the
--permanent parameter. When doing so, the change
only affects the permanent configuration and is not effective
immediately in the runtime configuration. There is currently no way to add a
rule to both the runtime and permanent configuration in a single invocation.
To achieve this, apply all necessary changes to the runtime
configuration and, when everything works as expected, issue the following
command:
root # firewall-cmd --runtime-to-permanent
This will write all current runtime rules into the permanent configuration.
Any temporary modifications you or other programs may have made to the
firewall in other contexts are made permanent this way. If you are unsure
about this, you can also take the opposite approach to be on the safe side:
add new rules to the permanent configuration and reload firewalld to
make them active.
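The permanent-first approach can be sketched as follows; the ssh service and internal zone are example values:

```shell
# Add the rule to the permanent configuration only; the runtime
# configuration is not affected yet
firewall-cmd --permanent --zone=internal --add-service=ssh

# Reload firewalld: the permanent configuration becomes the new
# runtime configuration, discarding any temporary changes
firewall-cmd --reload
```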
Some configuration items, like the default zone, are shared by both the runtime and permanent configuration. Changing them is reflected in both configurations at once.
To revert the runtime configuration to the permanent configuration, and
thereby discard any temporary changes, there are two
possibilities: via the firewalld command line interface or via
systemd:
root # firewall-cmd --reload
root # systemctl reload firewalld
For brevity, the examples in the following sections always operate on the runtime configuration, where applicable. Adjust them accordingly if you want to make them permanent.
You can list all network interfaces currently assigned to a zone like this:
root # firewall-cmd --zone=public --list-interfaces
eth0
Similarly, you can query which zone a specific interface is assigned to:
root # firewall-cmd --get-zone-of-interface=eth0
public
The following command lines assign an interface to a zone. The variant
using --add-interface will only work if
eth0 is not already assigned to another zone. The
variant using --change-interface will always work,
removing eth0 from its current zone if necessary:
root # firewall-cmd --zone=internal --add-interface=eth0
root # firewall-cmd --zone=internal --change-interface=eth0
Any operations without an explicit --zone argument
implicitly operate on the default zone. This pair of commands can be used
for getting and setting the default zone assignment:
root # firewall-cmd --get-default-zone
dmz
root # firewall-cmd --set-default-zone=public
Any network interfaces not explicitly assigned to a zone will be
automatically part of the default zone. Changing the default zone will
reassign all those network interfaces immediately for the permanent and
runtime configurations. You should never use a trusted zone like
internal as the default zone, to avoid unexpected
exposure to threats. For example, hotplugged network interfaces like USB
Ethernet interfaces would automatically become part of the trusted zone in
such cases.
Also note that interfaces that are not explicitly part of any zone do not appear in the zone interface list. There is currently no command to list unassigned interfaces. Because of this, it is best to avoid unassigned network interfaces during regular operation.
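To see only the zones that currently have interfaces or sources assigned to them, use:

```shell
# List active zones together with their assigned interfaces and sources
firewall-cmd --get-active-zones
```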
firewalld has a concept of services. A service
consists of definitions of ports and protocols. These definitions
logically belong together in the context of a given network service like
a web or mail server protocol. The following commands can be used to get
information about predefined services and their details:
root # firewall-cmd --get-services
[...] dhcp dhcpv6 dhcpv6-client dns docker-registry [...]
root # firewall-cmd --info-service dhcp
dhcp
  ports: 67/udp
  protocols:
  source-ports:
  modules:
  destination:
These service definitions can be used for easily making the associated network functionality accessible in a zone. This command line will open the http web server port in the internal zone, for example:
root # firewall-cmd --add-service=http --zone=internal
The removal of a service from a zone is performed using the counterpart
command --remove-service. You can also define custom
services using the --new-service sub-command. Refer to
http://www.firewalld.org/documentation/howto/add-a-service.html
for more details on how to do this.
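A custom service definition could be sketched as follows; the service name and port are invented for illustration:

```shell
# Create a new service in the permanent configuration
firewall-cmd --permanent --new-service=myservice
# Attach a port and a description to it
firewall-cmd --permanent --service=myservice --add-port=7777/tcp
firewall-cmd --permanent --service=myservice --set-description="Example service"
# Reload so the new service definition becomes available, then allow it
firewall-cmd --reload
firewall-cmd --zone=internal --add-service=myservice
```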
If you just want to open a single port by number, use the following approach. This opens TCP port 8000 in the internal zone:
root # firewall-cmd --add-port=8000/tcp --zone=internal
For removal use the counterpart command --remove-port.
firewalld supports a --timeout parameter that allows
you to open a service or port for a limited duration. This can be helpful
for quick testing and ensures that closing the service or port is not
forgotten. To allow the imap service in the
internal zone for five minutes, you would call:
root # firewall-cmd --add-service=imap --zone=internal --timeout=5m
firewalld offers a lockdown mode that prevents
changes to the firewall rules while it is active. Since applications can
automatically change the firewall rules via the D-Bus interface, and,
depending on the PolicyKit rules, regular users may be able to do the same,
it can be helpful to prevent changes in some situations. You can find more
information about this in
https://fedoraproject.org/wiki/Features/FirewalldLockdown.
It is important to understand that the lockdown mode feature
provides no real security but merely protection against accidental or
benign attempts to change the firewall. The way the lockdown mode is
currently implemented in firewalld provides no security against malicious
intent as is pointed out in
http://seclists.org/oss-sec/2017/q3/139.
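With that limitation in mind, lockdown mode can be queried and toggled as follows:

```shell
# Check whether lockdown mode is currently active
firewall-cmd --query-lockdown
# Enable and disable it
firewall-cmd --lockdown-on
firewall-cmd --lockdown-off
```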
iptables Rules #
firewalld claims exclusive control over the host's
netfilter rules. You should never modify
firewall rules using other tools like iptables. Doing
so could confuse firewalld and break security or functionality.
If you need to add custom firewall rules that are not covered by
firewalld features, there are two ways to do so. To directly pass
raw iptables syntax, use the
--direct option. It expects the table, chain, and
priority as initial arguments, and the rest of the command line is passed
as is to iptables. The following example adds a
connection tracking rule for the forwarding filter table:
root # firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i eth0 -o eth1 \
  -p tcp --dport 80 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
Additionally, firewalld implements so-called rich
rules, an extended syntax for specifying
iptables rules in an easier way. You can find the
syntax specification in
http://www.firewalld.org/documentation/man-pages/firewalld.richlanguage.html.
The following example drops all IPv4 packets originating from a certain
source address:
root # firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" \
  source address="192.168.2.4" drop'
firewalld is not designed to run as a fully fledged router. The
basic functionality for typical home router setups is available. For a
corporate production router you should not use firewalld, however, but
use dedicated router and firewall devices instead. The following provides
just a few pointers what to look for to utilize routing in firewalld:
First of all, IP forwarding needs to be enabled as outlined in Section 15.2, “Masquerading Basics”.
To enable IPv4 masquerading, for example in the
internal zone, issue the following command:
root # firewall-cmd --zone=internal --add-masquerade
firewalld can also enable port forwarding. The following command will
forward local TCP connections on port 80 to another host:
root # firewall-cmd --zone=public \
  --add-forward-port=port=80:proto=tcp:toport=80:toaddr=192.168.1.10
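A forwarding rule can be verified and removed again with the matching counterpart commands:

```shell
# List all forward ports configured in the zone
firewall-cmd --zone=public --list-forward-ports
# Remove the rule again
firewall-cmd --zone=public \
  --remove-forward-port=port=80:proto=tcp:toport=80:toaddr=192.168.1.10
```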
The most up-to-date information and other documentation about the firewalld
package is found in /usr/share/doc/packages/firewalld.
The home page of the netfilter and iptables project, http://www.netfilter.org, provides a large collection of
documents about iptables in general in many languages.
Today, Internet connections are cheap and available almost everywhere. However, not all connections are secure. Using a Virtual Private Network (VPN), you can create a secure network within an insecure network such as the Internet or Wi-Fi. It can be implemented in different ways and serves several purposes. In this chapter, we focus on the OpenVPN implementation to link branch offices via secure wide area networks (WANs).
This section defines some terms regarding VPN and gives a brief overview of some scenarios.
The two “ends” of a tunnel, the source or destination client.
A tap device simulates an Ethernet device (layer 2 packets in the OSI model, such as Ethernet frames). A tap device is used for creating a network bridge. It works with Ethernet frames.
A tun device simulates a point-to-point network (layer 3 packets in the OSI model, such as IP packets). A tun device is used with routing and works with IP frames.
Linking two locations through a primarily public network. From a more technical viewpoint, it is a connection between the client's device and the server's device. Usually a tunnel is encrypted, but it does not need to be by definition.
Whenever you set up a VPN connection, your IP packets are transferred over a secured tunnel. A tunnel can use either a tun or tap device. They are virtual network kernel drivers which implement the transmission of Ethernet frames or IP frames/packets.
Any user space program, such as OpenVPN, can attach itself to a tun or tap device to receive packets sent by your operating system. The program is also able to write packets to the device.
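To illustrate, a tun device can also be created manually with the ip tool (this requires root privileges); OpenVPN normally creates and configures the device itself when it starts:

```shell
# Create a tun device and show its state
ip tuntap add dev tun0 mode tun
ip addr show tun0
# Remove it again
ip tuntap del dev tun0 mode tun
```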
There are many solutions to set up and build a VPN connection. This section focuses on the OpenVPN package. Compared to other VPN software, OpenVPN can be operated in two modes:
Routing is an easy solution to set up. It is more efficient and scales better than a bridged VPN. Furthermore, it allows the user to tune the MTU (Maximum Transmission Unit) to raise efficiency. However, in a heterogeneous environment, if you do not have a Samba server on the gateway, NetBIOS broadcasts do not work. If you need IPv6, the drivers for the tun devices on both ends must support this protocol explicitly. This scenario is depicted in Figure 16.1, “Routed VPN”.
Bridging is a more complex solution. It is recommended when you need to browse Windows file shares across the VPN without setting up a Samba or WINS server. Bridged VPN is also needed to use non-IP protocols (such as IPX) or applications relying on network broadcasts. However, it is less efficient than routed VPN. Another disadvantage is that it does not scale well. This scenario is depicted in the following figures.
The major difference between bridging and routing is that a routed VPN cannot send IP broadcasts, while a bridged VPN can.
In the following example, we will create a point-to-point VPN tunnel. The
example demonstrates how to create a VPN tunnel between one client and a
server. It is assumed that your VPN server will use private IP addresses
like IP_OF_SERVER
and your client will use the IP address
IP_OF_CLIENT.
Make sure you select addresses which do not conflict with other IP addresses.
The following scenario is provided as an example meant for familiarizing yourself with VPN technology. Do not use this as a real-world scenario, as it can compromise the security and safety of your IT infrastructure!
To simplify working with OpenVPN configuration files, we recommend the following:
Place your OpenVPN configuration files in the directory
/etc/openvpn.
Name your configuration files
MY_CONFIGURATION.conf.
If there are multiple files that belong to the same configuration, place
them in a subdirectory like
/etc/openvpn/MY_CONFIGURATION.
To configure a VPN server, proceed as follows:
Install the package openvpn
on the machine that will later become your VPN server.
Open a shell, become root and create the VPN secret key:
root # openvpn --genkey --secret /etc/openvpn/secret.key
Copy the secret key to your client:
root # scp /etc/openvpn/secret.key root@IP_OF_CLIENT:/etc/openvpn/
Create the file /etc/openvpn/server.conf with the
following content:
dev tun
ifconfig IP_OF_SERVER IP_OF_CLIENT
secret secret.key
Set up a tun device configuration by creating a file called
/etc/sysconfig/network/ifcfg-tun0 with the following
content:
STARTMODE='manual'
BOOTPROTO='static'
TUNNEL='tun'
TUNNEL_SET_OWNER='nobody'
TUNNEL_SET_GROUP='nobody'
LINK_REQUIRED=no
PRE_UP_SCRIPT='systemd:openvpn@server'
PRE_DOWN_SCRIPT='systemd:openvpn@server'
The notation openvpn@server points to the OpenVPN
server configuration file located at
/etc/openvpn/server.conf. For more information, see
/usr/share/doc/packages/openvpn/README.SUSE.
If you use a firewall, start YaST and open UDP port 1194 ( › › ).
Start the OpenVPN server service by setting the tun device to
up:
tux > sudo wicked ifup tun0
You should see the confirmation:
tun0 up
To configure the VPN client, do the following:
Install the package openvpn
on your client VPN machine.
Create /etc/openvpn/client.conf with the
following content:
remote DOMAIN_OR_PUBLIC_IP_OF_SERVER
dev tun
ifconfig IP_OF_CLIENT IP_OF_SERVER
secret secret.key
Replace the placeholder DOMAIN_OR_PUBLIC_IP_OF_SERVER in the first line with either the domain name or the public IP address of your server.
Set up a tun device configuration by creating a file called
/etc/sysconfig/network/ifcfg-tun0 with the following
content:
STARTMODE='manual'
BOOTPROTO='static'
TUNNEL='tun'
TUNNEL_SET_OWNER='nobody'
TUNNEL_SET_GROUP='nobody'
LINK_REQUIRED=no
PRE_UP_SCRIPT='systemd:openvpn@client'
PRE_DOWN_SCRIPT='systemd:openvpn@client'
If you use a firewall, start YaST and open UDP port 1194 as described in Step 6 of Procedure 16.1, “VPN Server Configuration”.
Start the OpenVPN client service by setting the tun device to
up:
tux > sudo wicked ifup tun0
You should see the confirmation:
tun0 up
After OpenVPN has successfully started, test the availability of the tun device with the following command:
ip addr show tun0
To verify the VPN connection, use ping on both client
and server side to see if they can reach each other. Ping the server
from the client:
ping -I tun0 IP_OF_SERVER
Ping the client from the server:
ping -I tun0 IP_OF_CLIENT
The example in Section 16.2 is useful for testing, but not for daily work. This section explains how to build a VPN server that allows more than one connection at the same time. This is done with a public key infrastructure (PKI). A PKI consists of a pair of public and private keys for the server and each client, and a master certificate authority (CA), which is used to sign every server and client certificate.
This setup involves the following basic steps:
Before a VPN connection can be established, the client must authenticate the server certificate. Conversely, the server must also authenticate the client certificate. This is called mutual authentication. To create such certificates, use the YaST CA module. See Chapter 17, Managing X.509 Certification for more details.
To create a VPN root CA and the server and client certificates, proceed as follows:
Prepare a common VPN Certificate Authority (CA):
Start the YaST CA module.
Click .
Enter a and a , for example VPN-Server-CA.
Fill out the other boxes like e-mail addresses, organization, etc. and proceed with .
Enter your password twice and proceed with .
Review the summary. YaST displays the current settings for confirmation. Click . The root CA is created and displayed in the overview.
Create a VPN server certificate:
Select the root CA you created in Step 1 and click .
When prompted, enter the .
Click the tab and click › .
Specify a , for example,
openvpn.example.com
and proceed with .
Specify your password and confirm it. Then click .
Switch to the › list and check one of the following sets:
digitalSignature and
keyEncipherment, or,
digitalSignature and
keyAgreement
Switch to the › and type
serverAuth for a server certificate.
If you are using the method remote-cert-tls server or
remote-cert-tls client to verify certificates, limit
the number of times a key can be used. This mitigates
man-in-the-middle attacks.
For more information, see http://openvpn.net/index.php/open-source/documentation/howto.html#mitm.
Finish with and proceed with .
Review the summary. YaST displays the current settings for confirmation. Click . When the VPN server certificate is created, it is displayed in the tab.
Create VPN client certificates:
Make sure you are on the tab.
Click › .
Enter a , for example,
client1.example.com.
Enter the e-mail addresses for your client, for example,
user1@client1.example.com,
and click . Proceed with
.
Enter your password twice and click .
Switch to › list and check one of the following flags:
digitalSignature or,
keyAgreement or,
digitalSignature and
keyAgreement.
Switch to the › and type
clientAuth for a client certificate.
Review the summary. YaST displays the current settings for confirmation. Click . The VPN client certificate is created and is displayed in the tab.
If you need certificates for more clients, repeat Step 3.
After you have successfully finished Procedure 16.3, “Creating a VPN Server Certificate”, you have a VPN root CA, a VPN server certificate, and one or more VPN client certificates. To finish the task, proceed with the following procedure:
Choose the tab.
Export the VPN server certificate in two formats: PEM and unencrypted key in PEM.
Export the VPN client certificates and choose an export format, PEM or PKCS12 (preferred). For each client:
Select your VPN client certificate
(client1.example.com
in our example) and choose › .
Select , enter your VPN client certificate key
password and provide a PKCS12 password. Enter a
, click
and save the file to
/etc/openvpn/client1.p12.
Copy the files to your client (in our example,
client1.example.com).
Export the VPN CA (in our example
VPN-Server-CA):
Switch to the tab.
Select › .
Mark and save
the file to /etc/openvpn/vpn_ca.pem.
If desired, the client PKCS12 file can be converted into the PEM format using this command:
openssl pkcs12 -in client1.p12 -out client1.pem
Enter your client password to create the
client1.pem file. The PEM file contains the client
certificate, client key, and the CA certificate. You can split this
combined file using a text editor and create three separate files. The
file names can be used for the ca,
cert, and key options in the OpenVPN
configuration file (see Example 16.1, “VPN Server Configuration File”).
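Instead of splitting the combined PEM file with a text editor, openssl can extract each part directly from the PKCS12 file; the output file names here are only examples:

```shell
# CA certificate(s) only
openssl pkcs12 -in client1.p12 -cacerts -nokeys -out vpn_ca.pem
# Client certificate only
openssl pkcs12 -in client1.p12 -clcerts -nokeys -out client1.crt.pem
# Private key only (-nodes leaves the extracted key unencrypted)
openssl pkcs12 -in client1.p12 -nocerts -nodes -out client1.key.pem
```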
As the basis of your configuration file, copy
/usr/share/doc/packages/openvpn/sample-config-files/server.conf
to /etc/openvpn/. Then customize it to your needs.
# /etc/openvpn/server.conf
port 1194 1
proto udp 2
dev tun0 3
# Security 4
ca vpn_ca.pem
cert server_crt.pem
key server_key.pem
# ns-cert-type server
remote-cert-tls client 5
dh server/dh2048.pem 6
server 192.168.1.0 255.255.255.0 7
ifconfig-pool-persist /var/run/openvpn/ipp.txt 8
# Privileges 9
user nobody
group nobody
# Other configuration 10
keepalive 10 120
comp-lzo
persist-key
persist-tun
# status /var/log/openvpn-status.tun0.log 11
# log-append /var/log/openvpn-server.log 12
verb 4
The TCP/UDP port on which OpenVPN listens. You need to open the port in the firewall, see Chapter 15, Masquerading and Firewalls. The standard port for VPN is 1194, so you can usually leave that as it is. | |
The protocol, either UDP or TCP. | |
The tun or tap device. For the difference between these, see Section 16.1.1, “Terminology”. | |
The following lines contain the relative or absolute path to the root
server CA certificate ( | |
Require that peer certificates have been signed with an explicit key usage and extended key usage based on RFC3280 TLS rules. There is a description of how to make a server use this explicit key in Procedure 16.3, “Creating a VPN Server Certificate”. | |
The Diffie-Hellman parameters. Create the required file with the following command: openssl dhparam -out /etc/openvpn/dh2048.pem 2048 | |
Supplies a VPN subnet. The server can be reached by
| |
Records a mapping of clients and its virtual IP address in the given file. Useful when the server goes down and (after the restart) the clients get their previously assigned IP address. | |
For security reasons, run the OpenVPN daemon with reduced privileges. To
do so, specify that it should use the group and user
| |
Several other configuration options—see the comment in the
example configuration file:
| |
Enable this option to write short status updates with statistical data (“operational status dump”) to the named file. By default, this is not enabled.
All output is written to syslog. If you have more than one
configuration file (for example, one for home and another for work), it
is recommended to include the device name into the file name. This
avoids overwriting output files accidentally. In this case,
it is | |
By default, log messages go to syslog. Overwrite this behavior by
removing the hash character. In that case, all messages go to
|
After having completed this configuration, you can see log messages of
your OpenVPN server under /var/log/openvpn.log.
After having started it for the first time, it should finish with:
... Initialization Sequence Completed
If you do not see this message, check the log carefully for any hints of what is wrong in your configuration file.
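Assuming the server was started via systemd as described earlier, its status and log output can also be checked directly:

```shell
# Check whether the server instance is running
systemctl status openvpn@server
# Follow its log messages
journalctl -u openvpn@server -f
```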
As the basis of your configuration file, copy
/usr/share/doc/packages/openvpn/sample-config-files/client.conf
to /etc/openvpn/. Then customize it to your needs.
# /etc/openvpn/client.conf
client 1
dev tun 2
proto udp 3
remote IP_OR_HOST_NAME 1194 4
resolv-retry infinite
nobind
remote-cert-tls server 5
# Privileges 6
user nobody
group nobody
# Try to preserve some state across restarts.
persist-key
persist-tun
# Security 7
pkcs12 client1.p12
comp-lzo 8
Specifies that this machine is a client. | |
The network device. Both clients and server must use the same device. | |
The protocol. Use the same settings as on the server. | |
This is a security option for clients, which ensures that the host they connect to is a designated server. | |
Replace the placeholder IP_OR_HOST_NAME
with the respective host name or IP address of your VPN server. After
the host name, the port of the server is given. You can have multiple
lines of | |
For security reasons, run the OpenVPN daemon with reduced privileges. To
do so, specify that it should use the group and user
| |
Contains the client files. For security reasons, use a separate pair of files for each client. | |
Turn on compression. Only use this parameter if compression is enabled on the server as well. |
You can also use YaST to set up a VPN server. However, the YaST module does not support OpenVPN. Instead, it provides support for the IPsec protocol (as implemented in the software StrongSwan). Like OpenVPN, IPsec is a widely supported VPN scheme.
To start the YaST VPN module, select › .
Under , activate .
To create a new VPN, click , then enter a name for the connection.
Under , select .
Then choose the scenario:
The scenarios and are best suited to Linux client setups.
The scenario sets up a configuration that is natively supported by modern versions of Android, iOS, and macOS. It is based on a pre-shared key setup with an additional user name and password authentication.
The scenario is a configuration that is natively supported by Windows and BlackBerry devices. It is based on a certificate setup with an additional user name and password authentication.
For this example, choose .
To specify the key, click . Activate , then type the secret key. Confirm with .
Choose whether and how to limit access within your VPN under . To enable only certain IP ranges, specify these in CIDR format, separated by commas in . For more information about the CIDR format, see https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing.
Under , specify the format of IP addresses your VPN should provide to its clients.
To finish, click . The YaST VPN module will now automatically add and enable firewall rules to allow clients to connect to the new VPN.
To view the connection status,
in the following confirmation window, click .
You will then see the output of
systemctl status for your VPN, which allows you to check
if the VPN is running and configured correctly.
For more information on setting up a VPN connection using NetworkManager, see Section 28.3.4, “NetworkManager and VPN”.
For more information about VPN in general, see:
http://www.openvpn.net: the OpenVPN home page
man openvpn
/usr/share/doc/packages/openvpn/sample-config-files/:
example configuration files for different scenarios.
/usr/src/linux/Documentation/networking/tuntap.txt
(you need to install the kernel-source
package).
An increasing number of authentication mechanisms are based on cryptographic procedures. Digital certificates that assign cryptographic keys to their owners play an important role in this context. These certificates are used for communication and can also be found, for example, on company ID cards. The generation and administration of certificates is mostly handled by official institutions that offer this as a commercial service. In some cases, however, it may make sense to carry out these tasks yourself. For example, if a company does not want to pass personal data to third parties.
YaST provides two modules for certification, which offer basic management functions for digital X.509 certificates. The following sections explain the basics of digital certification and how to use YaST to create and administer certificates of this type.
Digital certification uses cryptographic processes to encrypt and protect data from access by unauthorized people. The user data is encrypted using a second data record, or key. The key is applied to the user data in a mathematical process, producing an altered data record in which the original content can no longer be identified. Asymmetrical encryption is now in general use (public key method). Keys always occur in pairs:
The private key must be kept safely by the key owner. Accidental publication of the private key compromises the key pair and renders it useless.
The key owner circulates the public key for use by third parties.
Because the public key process is in widespread use, there are many public keys in circulation. Successful use of this system requires that every user be sure that a public key actually belongs to the assumed owner. The assignment of users to public keys is confirmed by trustworthy organizations with public key certificates. Such certificates contain the name of the key owner, the corresponding public key, and the electronic signature of the person issuing the certificate.
Trustworthy organizations that issue and sign public key certificates are usually part of a certification infrastructure. This is responsible for the other aspects of certificate management, such as publication, withdrawal, and renewal of certificates. An infrastructure of this kind is generally called a public key infrastructure or PKI. One familiar PKI is the OpenPGP standard in which users publish their certificates themselves without central authorization points. These certificates become trustworthy when signed by other parties in the “web of trust.”
The X.509 Public Key Infrastructure (PKIX) is an alternative model defined by the IETF (Internet Engineering Task Force) that serves as a model for almost all publicly-used PKIs today. In this model, authentication is made by certificate authorities (CA) in a hierarchical tree structure. The root of the tree is the root CA, which certifies all sub-CAs. The lowest level of sub-CAs issue user certificates. The user certificates are trustworthy by certification that can be traced to the root CA.
The security of such a PKI depends on the trustworthiness of the CA certificates. To make certification practices clear to PKI customers, the PKI operator defines a certification practice statement (CPS) that defines the procedures for certificate management. This should ensure that the PKI only issues trustworthy certificates.
An X.509 certificate is a data structure with several fixed fields and, optionally, additional extensions. The fixed fields mainly contain the name of the key owner, the public key, and the data relating to the issuing CA (name and signature). For security reasons, a certificate should only have a limited period of validity, so a field is also provided for this date. The CA guarantees the validity of the certificate in the specified period. The CPS usually requires the PKI (the issuing CA) to create and distribute a new certificate before expiration.
The extensions can contain any additional information. An application is only required to be able to evaluate an extension if it is identified as critical. If an application does not recognize a critical extension, it must reject the certificate. Some extensions are only useful for a specific application, such as signature or encryption.
Table 17.1 shows the fields of a basic X.509 certificate in version 3.
| Field | Content |
|---|---|
| Version | The version of the certificate, for example, v3 |
| Serial Number | Unique certificate ID (an integer) |
| Signature | The ID of the algorithm used to sign the certificate |
| Issuer | Unique name (DN) of the issuing authority (CA) |
| Validity | Period of validity |
| Subject | Unique name (DN) of the owner |
| Subject Public Key Info | Public key of the owner and the ID of the algorithm |
| Issuer Unique ID | Unique ID of the issuing CA (optional) |
| Subject Unique ID | Unique ID of the owner (optional) |
| Extensions | Optional additional information, such as “KeyUsage” or “BasicConstraints” |
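The fields listed in Table 17.1 can be examined for any certificate with the openssl command line tool. The following sketch (hypothetical file names; assumes the openssl package is installed) creates a throwaway self-signed certificate and prints its fixed fields and extensions:

```shell
# Create a throwaway self-signed certificate for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout demo.key -out demo.pem \
  -subj "/C=DE/O=Example Org/CN=demo.example.com"

# Print the fixed fields (Version, Serial Number, Issuer, Validity,
# Subject, Subject Public Key Info) and any v3 extensions
openssl x509 -in demo.pem -noout -text
```

The output maps directly to the table above; a self-signed certificate, as here, has identical Issuer and Subject fields.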
If a certificate becomes untrustworthy before it has expired, it must be blocked immediately. This can become necessary if, for example, the private key has accidentally been made public. Blocking certificates is especially important if the private key belongs to a CA rather than a user certificate. In this case, all user certificates issued by the relevant CA must be blocked immediately. If a certificate is blocked, the PKI (the responsible CA) must make this information available to all those involved using a certificate revocation list (CRL).
These lists are supplied by the CA to public CRL distribution points (CDPs) at regular intervals. The CDP can optionally be named as an extension in the certificate, so a checker can fetch a current CRL for validation purposes. Alternatively, the revocation status of an individual certificate can be queried with the Online Certificate Status Protocol (OCSP). The authenticity of the CRLs is ensured with the signature of the issuing CA. Table 17.2 shows the basic parts of an X.509 CRL.
| Field | Content |
|---|---|
| Version | The version of the CRL, such as v2 |
| Signature | The ID of the algorithm used to sign the CRL |
| Issuer | Unique name (DN) of the publisher of the CRL (usually the issuing CA) |
| This Update | Time of publication (date, time) of this CRL |
| Next Update | Time of publication (date, time) of the next CRL |
| List of revoked certificates | Every entry contains the serial number of the certificate, the time of revocation, and optional extensions (CRL entry extensions) |
| Extensions | Optional CRL extensions |
The certificates and CRLs for a CA must be made publicly accessible using a repository. Because the signature protects the certificates and CRLs from being forged, the repository itself does not need to be secured in a special way. Instead, the aim is to grant the simplest and fastest access possible. For this reason, certificates are often provided on an LDAP or HTTP server. Find explanations about LDAP in Chapter 5, LDAP—A Directory Service. Chapter 24, The Apache HTTP Server contains information about the HTTP server.
YaST contains modules for the basic management of X.509 certificates. This mainly involves the creation of CAs, sub-CAs, and their certificates. The services of a PKI go far beyond simply creating and distributing certificates and CRLs. The operation of a PKI requires a well-conceived administrative infrastructure allowing continuous update of certificates and CRLs. This infrastructure is provided by commercial PKI products and can also be partly automated. YaST provides tools for creating and distributing CAs and certificates, but cannot currently offer this background infrastructure. To set up a small PKI, you can use the available YaST modules. However, you should use commercial products to set up an “official” or commercial PKI.
YaST provides two modules for basic CA management. The primary management tasks with these modules are explained here.
The first step when setting up a PKI is to create a root CA. Do the following:
Start YaST and go to › .
Click .
Enter the basic data for the CA in the first dialog, shown in Figure 17.1. The text boxes have the following meanings:
Enter the technical name of the CA. Directory names, among other things, are derived from this name, which is why only the characters listed in the help can be used. The technical name is also displayed in the overview when the module is started.
Enter the name for use in referring to the CA.
Several e-mail addresses can be entered that can be seen by the CA user. This can be helpful for inquiries.
Select the country where the CA is operated.
Optional values
Proceed with .
Enter a password in the second dialog. This password is always required when using the CA—when creating a sub-CA or generating certificates. The text boxes have the following meaning:
contains a meaningful default and does not generally need to be changed unless an application cannot deal with this key length. The higher the number, the more secure your key is.
The in the case of a CA defaults to 3650 days (roughly ten years). This long period makes sense because the replacement of a deleted CA involves an enormous administrative effort.
Clicking opens a dialog for setting different attributes from the X.509 extensions (Figure 17.4, “YaST CA Module—Extended Settings”). These values have rational default settings and should only be changed if you are really sure of what you are doing. Proceed with .
Review the summary. YaST displays the current settings for confirmation. Click . The root CA is created and then appears in the overview.
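For illustration, a comparable root CA can be created on the command line with openssl (hypothetical names and password; YaST performs similar steps internally, but this is not a literal transcript of what the module does):

```shell
# Sketch: root CA key pair and self-signed CA certificate with openssl
# (file names, subject, and password are invented for this example)
openssl req -x509 -newkey rsa:2048 -sha256 -days 3650 \
  -passout pass:example-secret \
  -keyout rootCA.key -out rootCA.pem \
  -subj "/C=DE/O=Example Org/CN=Example Root CA"

# Confirm the ten-year validity period discussed above
openssl x509 -in rootCA.pem -noout -dates
```

As in the YaST dialog, the password protects the CA key and is required for every later signing operation.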
In general, it is best not to allow user certificates to be issued by the root CA. It is better to create at least one sub-CA and create the user certificates from there. This has the advantage that the root CA can be kept isolated and secure, for example, on an isolated computer on secure premises. This makes it very difficult to attack the root CA.
If you need to change your password for your CA, proceed as follows:
Start YaST and open the CA module.
Select the required root CA and click .
Enter the password if you are entering this CA for the first time. YaST displays the CA key information in the tab (see Figure 17.2).
Click and select . A dialog opens.
Enter the old and the new password.
Finish with
A sub-CA is created in the same way as a root CA.
The validity period for a sub-CA must be fully within the validity period of the “parent” CA. Because a sub-CA is always created after the “parent” CA, the default value leads to an error message. To avoid this, enter a permissible value for the period of validity.
Do the following:
Start YaST and open the CA module.
Select the required root CA and click .
Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the tab (see Figure 17.2).
Click and select . This opens the same dialog as for creating a root CA.
Proceed as described in Section 17.2.1, “Creating a Root CA”.
It is possible to use one password for all your CAs. Enable to give your sub-CAs the same password as your root CA. This helps to reduce the number of passwords for your CAs.
Take into account that the validity period of a sub-CA must be shorter than the validity period of the root CA.
Select the tab. Revoke compromised or otherwise unwanted sub-CAs here, using . Revocation alone is not enough to deactivate a sub-CA. You must also publish revoked sub-CAs in a CRL. The creation of CRLs is described in Section 17.2.6, “Creating Certificate Revocation Lists (CRLs)”.
Finish with .
Creating client and server certificates is very similar to creating CAs in Section 17.2.1, “Creating a Root CA”. The same principles apply here. In certificates intended for e-mail signature, the e-mail address of the sender (the private key owner) should be contained in the certificate to enable the e-mail program to assign the correct certificate.
For certificate assignment during encryption, it is necessary for the e-mail address of the recipient (the public key owner) to be included in the certificate. In the case of server and client certificates, the host name of the server must be entered in the field. The default validity period for certificates is 365 days.
To create client and server certificates, do the following:
Start YaST and open the CA module.
Select the required root CA and click .
Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the tab.
Click (see Figure 17.3).
Click › and create a server certificate.
Click › and create a client certificate. Do not forget to enter an e-mail address.
Finish with
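The equivalent command line workflow for issuing a server certificate can be sketched with openssl (hypothetical file and host names; the demo CA stands in for the root or sub-CA created earlier):

```shell
# A demo CA, standing in for the root or sub-CA created earlier
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.pem -subj "/CN=Demo CA"

# Server key and certificate signing request; the CN carries the host name
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr \
  -subj "/C=DE/O=Example Org/CN=www.example.com"

# Sign the request with the CA, using the 365-day default validity
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -days 365 -out server.pem

# The issued certificate must validate against the CA
openssl verify -CAfile ca.pem server.pem
```

For a client certificate intended for e-mail, the subject would additionally carry the e-mail address, as described above.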
To revoke compromised or otherwise unwanted certificates, do the following:
Start YaST and open the CA module.
Select the required root CA and click .
Enter the password if you are entering a CA for the first time. YaST displays the CA key information in the tab.
Click (see Section 17.2.3, “Creating or Revoking a Sub-CA”).
Select the certificate to revoke and click .
Choose a reason to revoke this certificate.
Finish with .
Revocation alone is not enough to deactivate a certificate. Also publish revoked certificates in a CRL. Section 17.2.6, “Creating Certificate Revocation Lists (CRLs)” explains how to create CRLs. Revoked certificates can be completely removed after publication in a CRL with .
The previous sections explained how to create sub-CAs, client certificates, and server certificates. Special settings are used in the extensions of the X.509 certificate. These settings have been given rational defaults for every certificate type and do not normally need to be changed. However, you may have special requirements for these extensions. In this case, it may make sense to adjust the defaults; otherwise, you would need to change the extension settings manually every time you create a certificate.
Start YaST and open the CA module.
Enter the required root CA, as described in Section 17.2.3, “Creating or Revoking a Sub-CA”.
Click › .
Choose type of certificate to change and proceed with .
The dialog for changing the defaults as shown in Figure 17.4, “YaST CA Module—Extended Settings” opens.
Change the associated value on the right side and set or delete the critical setting with .
Click to see a short summary.
Finish your changes with .
All changes to the defaults only affect objects created after this point. Already-existing CAs and certificates remain unchanged.
If compromised or otherwise unwanted certificates need to be excluded from further use, they must first be revoked. The procedure for this is explained in Section 17.2.3, “Creating or Revoking a Sub-CA” (for sub-CAs) and Section 17.2.4, “Creating or Revoking User Certificates” (for user certificates). After this, a CRL must be created and published with this information.
The system maintains only one CRL for each CA. To create or update this CRL, do the following:
Start YaST and open the CA module.
Enter the required CA, as described in Section 17.2.3, “Creating or Revoking a Sub-CA”.
Click . The dialog that opens displays a summary of the last CRL of this CA.
Create a new CRL with if you have revoked new sub-CAs or certificates since its creation.
Specify the period of validity for the new CRL (default: 30 days).
Click to create and display the CRL. Afterward, you must publish this CRL.
Applications that evaluate CRLs reject every certificate if the CRL is not available or has expired. As a PKI provider, it is your duty always to create and publish a new CRL before the current CRL expires (period of validity). YaST does not provide a function for automating this procedure.
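For background, a CRL like the one created by the procedure above can be produced on the command line with openssl ca and a minimal configuration (a sketch with hypothetical names; YaST manages this for you):

```shell
# Minimal CA bookkeeping files required by "openssl ca"
touch index.txt
echo 1000 > crlnumber

# Minimal configuration for CRL generation (demo values)
cat > demo-ca.cnf <<'EOF'
[ ca ]
default_ca = demo
[ demo ]
database         = index.txt
crlnumber        = crlnumber
default_md       = sha256
default_crl_days = 30
EOF

# A demo CA certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.pem -subj "/CN=Demo CA"

# Generate the (here still empty) CRL with 30-day validity, as in the dialog
openssl ca -config demo-ca.cnf -gencrl \
  -keyfile ca.key -cert ca.pem -out crl.pem

# Display the This Update / Next Update fields from Table 17.2
openssl crl -in crl.pem -noout -text
```

Revoked certificates would be recorded in the database file and listed in the next CRL generated this way.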
The executing computer should be configured with the YaST LDAP client for LDAP export. This provides LDAP server information at runtime that can be used when completing dialog fields. Otherwise (although export may be possible), all LDAP data must be entered manually. You must always enter several passwords (see Table 17.3, “Passwords during LDAP Export”).
| Password | Meaning |
|---|---|
| LDAP Password | Authorizes the user to make entries in the LDAP tree. |
| Certificate Password | Authorizes the user to export the certificate. |
| New Certificate Password | The PKCS12 format is used during LDAP export. This format forces the assignment of a new password for the exported certificate. |
Certificates, CAs, and CRLs can be exported to LDAP.
To export a CA, enter the CA as described in Section 17.2.3, “Creating or Revoking a Sub-CA”. Select › in the subsequent dialog, which opens the dialog for entering LDAP data. If your system has been configured with the YaST LDAP client, the fields are already partly completed. Otherwise, enter all the data manually. Entries are made in LDAP in a separate tree with the attribute “caCertificate”.
Enter the CA containing the certificate to export then select . Select the required certificate from the certificate list in the upper part of the dialog and select › . The LDAP data is entered here in the same way as for CAs. The certificate is saved with the corresponding user object in the LDAP tree with the attributes “userCertificate” (PEM format) and “userPKCS12” (PKCS12 format).
Enter the CA containing the CRL to export and select . If desired, create a new CRL and click . The dialog that opens displays the export parameters. You can export the CRL for this CA either once or in periodical time intervals. Activate the export by selecting and enter the respective LDAP data. To do this at regular intervals, select the radio button and change the interval, if appropriate.
If you have set up a repository on the computer for administering CAs, you can use this option to create the CA objects directly as a file at the correct location. Different output formats are available, such as PEM, DER, and PKCS12. In the case of PEM, it is also possible to choose whether a certificate should be exported with or without key and whether the key should be encrypted. In the case of PKCS12, it is also possible to export the certification path.
Export certificates and CAs to a file in the same way as to LDAP, described in Section 17.2.7, “Exporting CA Objects to LDAP”, except select instead of . This then takes you to a dialog for selecting the required output format and entering the password and file name. The certificate is stored at the required location after clicking .
For CRLs click , select , choose the export format (PEM or DER) and enter the path. Proceed with to save it to the respective location.
You can select any storage location in the file system. This option can
also be used to save CA objects on a transport medium, such as a flash
disk. The /media directory generally holds any
type of drive except the hard disk of your system.
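The export formats mentioned above can also be produced with openssl (a sketch with hypothetical file names and password):

```shell
# A demo certificate and key, standing in for an exported server certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.pem -subj "/CN=www.example.com"

# PEM -> DER
openssl x509 -in server.pem -outform DER -out server.der

# Certificate plus key as a PKCS12 bundle; a new password must be
# assigned, as noted in Table 17.3
openssl pkcs12 -export -in server.pem -inkey server.key \
  -passout pass:new-secret -out server.p12
```

With PEM, the key can be included or omitted and optionally encrypted; PKCS12 always bundles certificate and key under the new password.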
If you have exported a server certificate with YaST to your media on an isolated CA management computer, you can import this certificate on a server as a common server certificate. Do this during installation or at a later point with YaST.
You need one of the PKCS12 formats to import your certificate successfully.
The general server certificate is stored in
/etc/ssl/servercerts and can be used there by any
CA-supported service. When this certificate expires, it can easily be
replaced using the same mechanisms. To get things functioning with the
replaced certificate, restart the participating services.
If you select here, you can select the source in the file system. This option can also be used to import certificates from removable media, such as a flash disk.
To import a common server certificate, do the following:
Start YaST and open under
View the data for the current certificate in the description field after YaST has been started.
Select and the certificate file.
Enter the password and click . The certificate is imported then displayed in the description field.
Close YaST with .
Many security vulnerabilities result from bugs in trusted programs. A trusted program runs with privileges that attackers want to possess. The program fails to keep that trust if there is a bug in the program that allows the attacker to acquire said privilege.
AppArmor® is an application security solution designed specifically to apply privilege confinement to suspect programs. AppArmor allows the administrator to specify the domain of activities the program can perform by developing a security profile. A security profile is a listing of files that the program may access and the operations the program may perform. AppArmor secures applications by enforcing good application behavior without relying on attack signatures, so it can prevent attacks even if previously unknown vulnerabilities are being exploited.
AppArmor consists of:
A library of AppArmor profiles for common Linux* applications, describing what files the program needs to access.
A library of AppArmor profile foundation classes (profile building blocks) needed for common application activities, such as DNS lookup and user authentication.
A tool suite for developing and enhancing AppArmor profiles, so that you can change the existing profiles to suit your needs and create new profiles for your own local and custom applications.
Several specially modified applications that are AppArmor enabled to provide enhanced security in the form of unique subprocess confinement (including Apache).
The AppArmor-related kernel code and associated control scripts to enforce AppArmor policies on your openSUSE® Leap system.
For more information about the science and security of AppArmor, refer to the following papers:
Describes the initial design and implementation of AppArmor. Published in the proceedings of the USENIX LISA Conference, December 2000, New Orleans, LA. This paper is now out of date, describing syntax and features that are different from the current AppArmor product. This paper should be used only for background, and not for technical documentation.
A good guide to strategic and tactical use of AppArmor to solve severe security problems in a very short period of time. Published in the Proceedings of the DARPA Information Survivability Conference and Expo (DISCEX III), April 2003, Washington, DC.
This document tries to convey a better understanding of the technical details of AppArmor. It is available at http://en.opensuse.org/SDB:AppArmor_geeks.
Prepare a successful deployment of AppArmor on your system by carefully considering the following items:
Determine the applications to profile. Read more on this in Section 19.3, “Choosing Applications to Profile”.
Build the needed profiles as roughly outlined in Section 19.4, “Building and Modifying Profiles”. Check the results and adjust the profiles when necessary.
Update your profiles whenever your environment changes or you need to react to security events logged by the reporting tool of AppArmor. Refer to Section 19.5, “Updating Your Profiles”.
AppArmor is installed and running on any installation of openSUSE® Leap by default, regardless of what patterns are installed. The packages listed below are needed for a fully-functional instance of AppArmor:
apparmor-docs
apparmor-parser
apparmor-profiles
apparmor-utils
audit
libapparmor1
perl-libapparmor
yast2-apparmor
If AppArmor is not installed on your system, install the pattern
apparmor for a complete
AppArmor installation. Either use the YaST Software Management
module for installation, or use Zypper on the command line:
tux > sudo zypper in -t pattern apparmor
AppArmor is configured to run by default on any fresh installation of openSUSE Leap. There are two ways of toggling the status of AppArmor:
Disable or enable AppArmor by removing or adding its boot script to the sequence of scripts executed on system boot. Status changes are applied on reboot.
Toggle the status of AppArmor in a running system by switching it off or on using the YaST AppArmor Control Panel. Changes made here are applied instantaneously. The Control Panel triggers a stop or start event for AppArmor and removes or adds its boot script in the system's boot sequence.
To disable AppArmor permanently (by removing it from the sequence of scripts executed on system boot) proceed as follows:
Start YaST.
Select › .
Mark apparmor by clicking its row in the list of
services, then click in the lower
part of the window. Check that changed to
in the apparmor row.
Confirm with .
AppArmor will not be initialized on reboot, and stays inactive until you re-enable it. Re-enabling a service using the YaST tool is similar to disabling it.
Toggle the status of AppArmor in a running system by using the AppArmor Configuration window. These changes take effect when you apply them and survive a reboot of the system. To toggle the status of AppArmor, proceed as follows:
Start YaST, select , and click in the main window.
Enable AppArmor by checking or disable AppArmor by deselecting it.
Click in the window.
You only need to protect the programs that are exposed to attacks in your particular setup, so only use profiles for those applications you actually run. Use the following list to determine the most likely candidates:
Network Agents
Web Applications
Cron Jobs
To find out which processes are currently running with open network ports
and might need a profile to confine them, run
aa-unconfined as root.
# aa-unconfined
19848 /usr/sbin/cupsd not confined
19887 /usr/sbin/sshd not confined
19947 /usr/lib/postfix/master not confined
1328 /usr/sbin/smbd confined by '/usr/sbin/smbd (enforce)'
Each of the processes in the above example labeled not
confined might need a custom profile to confine it. Those
labeled confined by are already protected by AppArmor.
For more information about choosing the right applications to profile, refer to Section 20.2, “Determining Programs to Immunize”.
AppArmor on openSUSE Leap ships with a preconfigured set of profiles for the most important applications. In addition, you can use AppArmor to create your own profiles for any application you want.
There are two ways of managing profiles. One is to use the graphical front-end provided by the YaST AppArmor modules and the other is to use the command line tools provided by the AppArmor suite itself. The main difference is that YaST supports only basic functionality for AppArmor profiles, while the command line tools let you update/tune the profiles in a more fine-grained way.
For each application, perform the following steps to create a profile:
As root, let AppArmor create a rough outline of the
application's profile by running aa-genprof
PROGRAM_NAME.
or
Outline the basic profile by running › › › and specifying the complete path to the application you want to profile.
A new basic profile is outlined and put into learning mode, which means that it logs any activity of the program you are executing, but does not yet restrict it.
Run the full range of the application's actions to let AppArmor get a very specific picture of its activities.
Let AppArmor analyze the log files generated in Step 2 by typing S in aa-genprof.
AppArmor scans the logs it recorded during the application's run and asks you to set the access rights for each event that was logged. Either set them for each file or use globbing.
Depending on the complexity of your application, it might be necessary to repeat Step 2 and Step 3. Confine the application, exercise it under the confined conditions, and process any new log events. To properly confine the full range of an application's capabilities, you might be required to repeat this procedure often.
When you finish aa-genprof, your profile is set to
enforce mode. The profile is applied and AppArmor restricts the
application according to it.
If you started aa-genprof on an application that had
an existing profile that was in complain mode, this profile remains in
learning mode upon exit of this learning cycle. For more information
about changing the mode of a profile, refer to
Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode”
and
Section 24.7.3.6, “aa-enforce—Entering Enforce Mode”.
Test your profile settings by performing every task you need with the application you confined. Normally, the confined program runs smoothly and you do not notice AppArmor activities. However, if you notice certain misbehavior with your application, check the system logs and see if AppArmor is too tightly confining your application. Depending on the log mechanism used on your system, there are several places to look for AppArmor log entries:
/var/log/audit/audit.log
The command journalctl | grep -i apparmor
The command dmesg -T
To adjust the profile, analyze the log messages relating to this application again as described in Section 24.7.3.9, “aa-logprof—Scanning the System Log”. Determine the access rights or restrictions when prompted.
For more information about profile building and modification, refer to Chapter 21, Profile Components and Syntax, Chapter 23, Building and Managing Profiles with YaST, and Chapter 24, Building Profiles from the Command Line.
Software and system configurations change over time. As a result, your
profile setup for AppArmor might need some fine-tuning from time to time.
AppArmor checks your system log for policy violations or other AppArmor
events and lets you adjust your profile set accordingly. Any application
behavior that is outside of any profile definition can be addressed by
aa-logprof. For more information, see
Section 24.7.3.9, “aa-logprof—Scanning the System Log”.
Effective hardening of a computer system requires minimizing the number of programs that mediate privilege, then securing the programs as much as possible. With AppArmor, you only need to profile the programs that are exposed to attack in your environment, which drastically reduces the amount of work required to harden your computer. AppArmor profiles enforce policies to make sure that programs do what they are supposed to do, but nothing else.
AppArmor provides immunization technologies that protect applications from the inherent vulnerabilities they possess. After installing AppArmor, setting up AppArmor profiles, and rebooting the computer, your system becomes immunized because it begins to enforce the AppArmor security policies. Protecting programs with AppArmor is called immunizing.
Administrators need only concern themselves with the applications that are vulnerable to attacks, and generate profiles for these. Hardening a system thus comes down to building and maintaining the AppArmor profile set and monitoring any policy violations or exceptions logged by AppArmor's reporting facility.
Users should not notice AppArmor. It runs “behind the scenes” and does not require any user interaction. Performance is not noticeably affected by AppArmor. If some activity of the application is not covered by an AppArmor profile or if some activity of the application is prevented by AppArmor, the administrator needs to adjust the profile of this application.
AppArmor sets up a collection of default application profiles to protect standard Linux services. To protect other applications, use the AppArmor tools to create profiles for the applications that you want protected. This chapter introduces the philosophy of immunizing programs. Proceed to Chapter 21, Profile Components and Syntax, Chapter 23, Building and Managing Profiles with YaST, or Chapter 24, Building Profiles from the Command Line if you are ready to build and manage AppArmor profiles.
AppArmor provides streamlined access control for network services by specifying which files each program is allowed to read, write, and execute, and which type of network it is allowed to access. This ensures that each program does what it is supposed to do, and nothing else. AppArmor quarantines programs to protect the rest of the system from being damaged by a compromised process.
AppArmor is a host intrusion prevention or mandatory access control scheme. Previously, access control schemes were centered around users because they were built for large timeshare systems. Alternatively, modern network servers largely do not permit users to log in, but instead provide a variety of network services for users (such as Web, mail, file, and print servers). AppArmor controls the access given to network services and other programs to prevent weaknesses from being exploited.
To get a more in-depth overview of AppArmor and the overall concept behind it, refer to Section 18.2, “Background Information on AppArmor Profiling”.
This section provides a very basic understanding of what is happening “behind the scenes” (and under the hood of the YaST interface) when you run AppArmor.
An AppArmor profile is a plain text file containing path entries and access permissions. See Section 21.1, “Breaking an AppArmor Profile into Its Parts” for a detailed reference profile. The directives contained in this text file are then enforced by the AppArmor routines to quarantine the process or program.
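Such a profile might look like the following minimal sketch for a hypothetical program /usr/bin/foo (the program and all paths are invented for illustration):

```
# Minimal AppArmor profile sketch for a hypothetical /usr/bin/foo
#include <tunables/global>

/usr/bin/foo {
  #include <abstractions/base>

  # map its own binary
  /usr/bin/foo mr,
  # read its configuration
  /etc/foo/*.conf r,
  # append to its log file
  /var/log/foo.log w,
}
```

Each rule pairs a path (optionally with globbing) with access permissions such as r (read), w (write), and m (memory map); anything not listed is denied in enforce mode.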
The following tools interact in the building and enforcement of AppArmor profiles and policies:
aa-status
aa-status reports various aspects of the current
state of the running AppArmor confinement.
aa-unconfined
aa-unconfined detects any application running on
your system that listens for network connections and is not protected
by an AppArmor profile. Refer to
Section 24.7.3.12, “aa-unconfined—Identifying Unprotected Processes”
for detailed information about this tool.
aa-autodep
aa-autodep creates a basic framework of a profile
that needs to be fleshed out before it is put to use in production.
The resulting profile is loaded and put into complain mode, reporting
any behavior of the application that is not (yet) covered by AppArmor
rules. Refer to
Section 24.7.3.1, “aa-autodep—Creating Approximate Profiles”
for detailed information about this tool.
aa-genprof
aa-genprof generates a basic profile and asks you
to refine this profile by executing the application and generating log
events that need to be taken care of by AppArmor policies. You are
guided through a series of questions to deal with the log events that
have been triggered during the application's execution. After the
profile has been generated, it is loaded and put into enforce mode.
Refer to
Section 24.7.3.8, “aa-genprof—Generating Profiles”
for detailed information about this tool.
aa-logprof
aa-logprof interactively scans and reviews the log
entries generated by an application that is confined by an AppArmor
profile in both complain and enforce modes. It assists you in
generating new entries in the profile concerned. Refer to
Section 24.7.3.9, “aa-logprof—Scanning the System Log”
for detailed information about this tool.
aa-easyprof
aa-easyprof provides an easy-to-use interface for
AppArmor profile generation. aa-easyprof supports
the use of templates and policy groups to quickly profile an
application. Note that while this tool can help with policy
generation, its utility is dependent on the quality of the templates,
policy groups and abstractions used. aa-easyprof
may create a profile that is less restricted than creating the profile
with aa-genprof and aa-logprof.
aa-complain
aa-complain toggles the mode of an AppArmor profile
from enforce to complain. Violations to rules set in a profile are
logged, but the profile is not enforced. Refer to
Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode”
for detailed information about this tool.
aa-enforce
aa-enforce toggles the mode of an AppArmor profile
from complain to enforce. Violations to rules set in a profile are
logged and not permitted—the profile is enforced. Refer to
Section 24.7.3.6, “aa-enforce—Entering Enforce Mode”
for detailed information about this tool.
aa-disable
aa-disable disables the enforcement mode for one or
more AppArmor profiles. This command will unload the profile from the
kernel and prevent it from being loaded on AppArmor start-up. The
aa-enforce and aa-complain
utilities may be used to change this behavior.
aa-exec
aa-exec launches a program confined by the
specified AppArmor profile and/or namespace. If both a profile and
namespace are specified, the command will be confined by the profile
in the new policy namespace. If only a namespace is specified, the
profile name of the current confinement will be used. If neither a
profile nor a namespace is specified, the command will be run using
standard profile attachment—as if run without
aa-exec.
aa-notify
aa-notify is a handy utility that displays AppArmor
notifications in your desktop environment. You can also configure it
to display a summary of notifications for the specified number of
recent days. For more information, see
Section 24.7.3.13, “aa-notify”.
Now that you have familiarized yourself with AppArmor, start selecting the applications for which to build profiles. Programs that need profiling are those that mediate privilege. The following programs have access to resources that the person using the program does not have, so they grant the privilege to the user when used:
cron Jobs
Programs that are run periodically by
cron. Such programs read input
from a variety of sources and can run with special privileges,
sometimes with as much as root privilege. For example,
cron can run
/usr/sbin/logrotate daily to rotate, compress, or
even mail system logs. For instructions for finding these types of
programs, refer to
Section 20.3, “Immunizing cron Jobs”.
Programs that can be invoked through a Web browser, including CGI Perl scripts, PHP pages, and more complex Web applications. For instructions for finding these types of programs, refer to Section 20.4.1, “Immunizing Web Applications”.
Programs (servers and clients) that have open network ports. User clients, such as mail clients and Web browsers, also mediate privilege. These programs run with the privilege to write to the user's home directory and they process input from potentially hostile remote sources, such as hostile Web sites and e-mailed malicious code. For instructions for finding these types of programs, refer to Section 20.4.2, “Immunizing Network Agents”.
Conversely, unprivileged programs do not need to be profiled. For
example, a shell script might invoke the cp
program to copy a file. Because cp does not by
default have its own profile or subprofile, it inherits the profile
of the parent shell script. Thus cp can copy any
files that the parent shell script's profile can read and write.
cron Jobs
To find programs that are run by
cron, inspect your local
cron configuration.
Unfortunately, cron configuration
is rather complex, so there are numerous files to inspect. Periodic
cron jobs are run from these
files:
/etc/crontab
/etc/cron.d/*
/etc/cron.daily/*
/etc/cron.hourly/*
/etc/cron.monthly/*
/etc/cron.weekly/*
The crontab command lists/edits the current user's
crontab. To manipulate root's
cron jobs, first become
root, and then edit the tasks with crontab -e
or list them with crontab -l.
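As a first pass over these files, the command column of crontab-format files can be extracted with a short shell function. This is a rough sketch, not an official AppArmor tool, and it only handles the simple system crontab format:

```shell
# Extract the programs run from a system crontab-format file
# (min hour dom mon dow user command ...), so you can decide
# which of them may need AppArmor profiles.
extract_cron_programs() {
    # Skip blank lines, comments, and VARIABLE=value assignments,
    # then print the command field (field 7 in the system crontab).
    awk '!/^[[:space:]]*(#|$)/ && !/^[A-Za-z_][A-Za-z0-9_]*=/ { print $7 }' "$1"
}
```

Running it against /etc/crontab and the files in /etc/cron.d/ yields a first list of candidate programs to profile.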
An automated method for finding network server daemons that should be
profiled is to use the aa-unconfined tool.
The aa-unconfined tool uses the command
netstat -nlp to inspect open ports from inside your
computer, detect the programs associated with those ports, and inspect
the set of AppArmor profiles that you have loaded.
aa-unconfined then reports these programs along with
the AppArmor profile associated with each program, or reports
“none” (if the program is not confined).
If you create a new profile, you must restart the program that has been profiled for it to be effectively confined by AppArmor.
Below is a sample aa-unconfined output:
37021 /usr/sbin/sshd confined by '/usr/sbin/sshd (enforce)'
4040 /usr/sbin/smbd confined by '/usr/sbin/smbd (enforce)'
4373 /usr/lib/postfix/master confined by '/usr/lib/postfix/master (enforce)'
4505 /usr/sbin/httpd2-prefork confined by '/usr/sbin/httpd2-prefork (enforce)'
646 /usr/lib/wicked/bin/wickedd-dhcp4 not confined
647 /usr/lib/wicked/bin/wickedd-dhcp6 not confined
5592 /usr/bin/ssh not confined
7146 /usr/sbin/cupsd confined by '/usr/sbin/cupsd (complain)'
The first portion is a number: the process ID (PID) of the listening program.
The second portion is a string: the absolute path of the listening program.
The final portion indicates the profile confining the program, if any.
aa-unconfined requires root privileges and
should not be run from a shell that is confined by an AppArmor profile.
aa-unconfined does not distinguish between one network
interface and another, so it reports all unconfined processes, even those
that might be listening to an internal LAN interface.
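If you save the aa-unconfined output to review later, a short filter can reduce it to just the programs that still need profiles. This is a sketch that assumes the output format shown above:

```shell
# Print the paths of programs that aa-unconfined reported as
# "not confined", reading aa-unconfined output on standard input.
list_unconfined() {
    awk '/not confined$/ { print $2 }'
}
```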
Finding user network client applications is dependent on your user
preferences. The aa-unconfined tool detects and
reports network ports opened by client applications, but only those
client applications that are running at the time the
aa-unconfined analysis is performed. This is a problem
because network services tend to be running all the time, while network
client applications tend only to be running when the user is interested
in them.
Applying AppArmor profiles to user network client applications is also dependent on user preferences. Therefore, we leave the profiling of user network client applications as an exercise for the user.
To aggressively confine desktop applications, the
aa-unconfined command supports a
--paranoid option, which reports all processes running
and the corresponding AppArmor profiles that might or might not be
associated with each process. The user can then decide whether each of
these programs needs an AppArmor profile.
If you have new or modified profiles, you can submit them to the <apparmor@lists.ubuntu.com> mailing list along with a use case for the application behavior that you exercised. The AppArmor team reviews the submissions and may include the work in openSUSE Leap. We cannot guarantee that every profile will be included, but we make a sincere effort to include as much as possible.
To find Web applications, investigate your Web server configuration. The
Apache Web server is highly configurable and Web applications can be
stored in many directories, depending on your local configuration.
openSUSE Leap, by default, stores Web applications in
/srv/www/cgi-bin/. To the maximum extent possible,
each Web application should have an AppArmor profile.
Once you find these programs, you can use the
aa-genprof and aa-logprof tools to
create or update their AppArmor profiles.
Because CGI programs are executed by the Apache Web server, the profile
for Apache itself, usr.sbin.httpd2-prefork for
Apache2 on openSUSE Leap, must be modified to add execute permissions
to each of these programs. For example, adding the line
/srv/www/cgi-bin/my_hit_counter.pl rPx grants Apache
permission to execute the Perl script
my_hit_counter.pl and requires that there be a
dedicated profile for my_hit_counter.pl. If
my_hit_counter.pl does not have a dedicated profile
associated with it, the rule should say
/srv/www/cgi-bin/my_hit_counter.pl rix to cause
my_hit_counter.pl to inherit the
usr.sbin.httpd2-prefork profile.
Some users might find it inconvenient to specify execute permission for
every CGI script that Apache might invoke. Instead, the administrator
can grant controlled access to collections of CGI scripts. For example,
adding the line /srv/www/cgi-bin/*.{pl,py,pyc} rix
allows Apache to execute all files in
/srv/www/cgi-bin/ ending in .pl
(Perl scripts) and .py or .pyc
(Python scripts). As above, the ix part of the rule
causes Python scripts to inherit the Apache profile, which is
appropriate if you do not want to write individual profiles for each CGI
script.
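Combining the pieces above, the relevant excerpt of the Apache profile might look like the following sketch. The exact paths depend on your configuration, and my_hit_counter.pl is the hypothetical script discussed above:

```
/usr/sbin/httpd2-prefork {
  # ... existing Apache rules ...

  # A script with its own dedicated profile:
  /srv/www/cgi-bin/my_hit_counter.pl rPx,

  # Alternatively, let a whole class of scripts inherit Apache's profile:
  # /srv/www/cgi-bin/*.{pl,py,pyc} rix,
}
```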
If you want the subprocess confinement module
(apache2-mod-apparmor) functionality when Web
applications handle Apache modules (mod_perl and
mod_php), use the ChangeHat features when you add
a profile in YaST or at the command line. To take advantage of the
subprocess confinement, refer to
Section 25.2, “Managing ChangeHat-Aware Applications”.
Profiling Web applications that use mod_perl and
mod_php requires slightly different handling. In
this case, the “program” is a script interpreted directly
by the module within the Apache process, so no exec happens. Instead,
the AppArmor version of Apache calls change_hat()
using a subprofile (a “hat”) corresponding to the name of
the URI requested.
The name presented for the script to execute might not be the URI, depending on how Apache has been configured for where to look for module scripts. If you have configured your Apache to place scripts in a different place, the different names appear in the log file when AppArmor complains about access violations. See Chapter 27, Managing Profiled Applications.
For mod_perl and mod_php
scripts, this is the name of the Perl script or the PHP page requested.
For example, adding this subprofile allows the
localtime.php page to execute and access to the
local system time and locale files:
/usr/bin/httpd2-prefork {
# ...
^/cgi-bin/localtime.php {
/etc/localtime r,
/srv/www/cgi-bin/localtime.php r,
/usr/lib/locale/** r,
}
}
If no subprofile has been defined, the AppArmor version of Apache applies
the DEFAULT_URI hat. This subprofile is
sufficient to display a Web page. The
DEFAULT_URI hat that AppArmor provides by
default is the following:
^DEFAULT_URI {
/usr/sbin/suexec2 mixr,
/var/log/apache2/** rwl,
@{HOME}/public_html r,
@{HOME}/public_html/** r,
/srv/www/htdocs r,
/srv/www/htdocs/** r,
/srv/www/icons/*.{gif,jpg,png} r,
/srv/www/vhosts r,
/srv/www/vhosts/** r,
/usr/share/apache2/** r,
/var/lib/php/sess_* rwl
}
To use a single AppArmor profile for all Web pages and CGI scripts served
by Apache, a good approach is to edit the
DEFAULT_URI subprofile. For more information on
confining Web applications with Apache, see
Chapter 25, Profiling Your Web Applications Using ChangeHat.
To find network server daemons and network clients (such as
fetchmail or Firefox) that need to be profiled,
you should inspect the open ports on your machine. Also consider
the programs that are answering on those ports, and provide profiles
for as many of those programs as possible. If you provide profiles
for all programs with open network ports, an attacker cannot get to
the file system on your machine without passing through an AppArmor
profile policy.
Scan your server for open network ports manually from outside the
machine using a scanner (such as nmap), or from inside the machine using
the netstat --inet -n -p command as root.
Then, inspect the machine to determine which programs are answering on
the discovered open ports.
Refer to the man page of the netstat command for a
detailed reference of all possible options.
Building AppArmor profiles to confine an application is very straightforward and intuitive. AppArmor ships with several tools that assist in profile creation. It does not require you to do any programming or script handling. The only task that is required of the administrator is to determine a policy of strictest access and execute permissions for each application that needs to be hardened.
Updates or modifications to the application profiles are only required if the software configuration or the desired range of activities changes. AppArmor offers intuitive tools to handle profile updates and modifications.
You are ready to build AppArmor profiles after you select the programs to profile. To do so, it is important to understand the components and syntax of profiles. AppArmor profiles contain several building blocks that help build simple and reusable profile code:
Include statements are used to pull in parts of other AppArmor profiles to simplify the structure of new profiles.
Abstractions are include statements grouped by common application tasks.
Program chunks are include statements that contain chunks of profiles that are specific to program suites.
Capability entries are profile entries for any of the POSIX.1e (http://en.wikipedia.org/wiki/POSIX#POSIX.1) Linux capabilities, allowing fine-grained control over what a confined process is allowed to do through system calls that require privileges.
Network Access Control Entries mediate network access based on the address type and family.
Local variables define shortcuts for paths.
File Access Control Entries specify the set of files an application can access.
rlimit entries set and control an application's resource limits.
For help determining the programs to profile, refer to Section 20.2, “Determining Programs to Immunize”. To start building AppArmor profiles with YaST, proceed to Chapter 23, Building and Managing Profiles with YaST. To build profiles using the AppArmor command line interface, proceed to Chapter 24, Building Profiles from the Command Line.
The easiest way of explaining what a profile consists of and how to
create one is to show the details of a sample profile, in this case for a
hypothetical application called /usr/bin/foo:
#include <tunables/global>

# a comment naming the application to confine
/usr/bin/foo {
   #include <abstractions/base>

   capability setgid,
   network inet tcp,

   link /etc/sysconfig/foo -> /etc/foo.conf,

   /bin/mount            ux,
   /dev/{,u}random       r,
   /etc/ld.so.cache      r,
   /etc/foo/*            r,
   /lib/ld-*.so*         mr,
   /lib/lib*.so*         mr,
   /proc/[0-9]**         r,
   /usr/lib/**           mr,
   /tmp/                 r,
   /tmp/foo.pid          wr,
   /tmp/foo.*            lrw,
   /@{HOME}/.foo_file    rw,
   /@{HOME}/.foo_lock    kw,
   owner /shared/foo/**  rw,
   /usr/bin/foobar       Cx,
   /bin/**               Px -> bin_generic,

   # a comment about foo's local (children) profile for /usr/bin/foobar
   profile /usr/bin/foobar {
      /bin/bash          rmix,
      /bin/cat           rmix,
      /bin/more          rmix,
      /var/log/foobar*   rwl,
      /etc/foobar        r,
   }

   # foo's hat, bar
   ^bar {
      /lib/ld-*.so*      mr,
      /usr/bin/bar       px,
      /var/spool/*       rwl,
   }
}
The #include <tunables/global> statement loads a file containing variable definitions.
The normalized path to the program that is confined.
The curly braces ({}) serve as a container for include statements, path entries, capability entries, and rules.
The #include <abstractions/base> directive pulls in components of AppArmor profiles to simplify profiles.
Capability entry statements enable each of the 29 POSIX.1e draft capabilities.
A directive determining the kind of network access allowed to the application. For details, refer to Section 21.5, “Network Access Control”.
A link pair rule specifying the source and the target of a link. See Section 21.7.6, “Link Pair” for more information.
The curly braces ({}) in /dev/{,u}random make one rule apply to a set of paths: it matches both /dev/random and /dev/urandom.
A path entry specifying what areas of the file system the program can access. The first part of a path entry specifies the absolute path of a file (including regular expression globbing) and the second part indicates permissible access modes (for example r for read, w for write, and x for execute).
The @{HOME} variable expands to a value that can be changed without changing the entire profile.
An owner conditional rule, granting read and write permission on files owned by the user. Refer to Section 21.7.8, “Owner Conditional Rules” for more information.
The Cx entry defines a transition to the local profile for /usr/bin/foobar defined later in this profile.
A named profile transition to the profile bin_generic located in the global scope. See Section 21.8.7, “Named Profile Transitions” for details.
The local profile for /usr/bin/foobar, embedded in the parent profile.
This section references a “hat” subprofile of the application. For more details on AppArmor's ChangeHat feature, refer to Chapter 25, Profiling Your Web Applications Using ChangeHat.
When a profile is created for a program, the program can access only the files, modes, and POSIX capabilities specified in the profile. These restrictions are in addition to the native Linux access controls.
Example:
To gain the capability CAP_CHOWN, the program must both have access to CAP_CHOWN under conventional Linux access controls (typically, by being a root-owned process) and have the capability chown in its profile. Similarly, to be able to write to the file /foo/bar, the program must both have the correct user ID and mode bits set in the file's attributes and have /foo/bar w in its profile.
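Expressed as profile rules, the two AppArmor-side requirements above look like this sketch for the hypothetical /usr/bin/foo program used throughout this chapter:

```
/usr/bin/foo {
  capability chown,
  /foo/bar w,
}
```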
Attempts to violate AppArmor rules are recorded in
/var/log/audit/audit.log if the
audit package is installed, or
in /var/log/messages, or only in
journalctl if no traditional syslog is
installed. Often AppArmor rules prevent an attack from working
because necessary files are not accessible and, in all cases, AppArmor
confinement restricts the damage that the attacker can do to the set of
files permitted by AppArmor.
AppArmor knows four different types of profiles: standard profiles,
unattached profiles, local profiles and hats. Standard and unattached
profiles are stand-alone profiles, each stored in a file under
/etc/apparmor.d/. Local profiles and hats are child profiles embedded inside a parent profile, used to provide tighter or alternate confinement for a subtask of an application.
The default AppArmor profile is attached to a program by its name, so a profile name must match the path to the application it is to confine.
/usr/bin/foo {
...
}
This profile will be automatically used whenever an unconfined process
executes /usr/bin/foo.
Unattached profiles do not reside in the file system namespace and
therefore are not automatically attached to an application. The name of
an unattached profile is preceded by the keyword
profile. You can freely choose a profile name, with the following limitations: the name must not begin with a : or . character, and if it contains whitespace, it must be quoted. If the name begins with a /, the profile is considered to be a standard profile, so the following two profiles are identical:
profile /usr/bin/foo {
...
}
/usr/bin/foo {
...
}
Unattached profiles are never used automatically, nor can they be
transitioned to through a Px rule. They need to be
attached to a program by either using a named profile transition (see
Section 21.8.7, “Named Profile Transitions”) or with the
change_profile rule (see
Section 21.2.5, “Change rules”).
Unattached profiles are useful for specialized profiles for system
utilities that generally should not be confined by a system-wide profile
(for example, /bin/bash). They can also be used to
set up roles or to confine a user.
Local profiles offer a convenient way to provide specialized confinement for utility programs launched by a confined application.
They are specified like standard profiles, except that they are embedded
in a parent profile and begin with the profile
keyword:
/parent/profile {
...
profile /local/profile {
...
}
}
To transition to a local profile, either use a cx
rule (see Section 21.8.2, “Discrete Local Profile Execute Mode (Cx)”) or a named
profile transition (see
Section 21.8.7, “Named Profile Transitions”).
AppArmor "hats" are local profiles with some additional restrictions
and an implicit rule allowing for change_hat to be
used to transition to them. Refer to Chapter 25, Profiling Your Web Applications Using ChangeHat
for a detailed description.
AppArmor provides change_hat and
change_profile rules that control domain
transitioning. change_hat rules are specified by defining
hats in a profile, while change_profile rules refer
to another profile and start with the keyword
change_profile:
change_profile -> /usr/bin/foobar,
Both change_hat and change_profile provide for an application-directed profile transition, without having to launch a separate application. change_profile provides a generic one-way transition between any of the loaded profiles. change_hat provides for a returnable parent-child transition where an application can switch from the parent profile to the hat profile and, if it provides the correct secret key, return to the parent profile at a later time.
change_profile is best used in situations where an
application goes through a trusted setup phase and then can lower its
privilege level. Any resources mapped or opened during the start-up
phase may still be accessible after the profile change, but the new
profile will restrict the opening of new resources, and will even limit
some resources opened before the switch. Specifically, memory
resources will still be available while capability and file resources
(as long as they are not memory mapped) can be limited.
change_hat is best used in situations where an application runs a virtual machine or an interpreter that does not provide direct access to the application's resources (for example Apache's mod_php). Since change_hat stores the return secret key in the application's memory, the phase of reduced privilege should not have direct access to that memory. It is also important that file access is properly separated, since the hat can restrict accesses to a file handle but does not close it. If an application does buffering and provides access to the open files with buffering, the accesses to these files might not be seen by the kernel and hence not restricted by the new profile.
The change_hat and change_profile
domain transitions are less secure than a domain transition done
through an exec because they do not affect a process's memory mappings,
nor do they close resources that have already been opened.
Include statements are directives that pull in components of other AppArmor profiles to simplify profiles. Include files retrieve access permissions for programs. By using an include, you can give the program access to directory paths or files that are also required by other programs. Using includes can reduce the size of a profile.
Include statements normally begin with a hash (#)
sign. This is confusing because the same hash sign is used for comments
inside profile files. Because of this, #include is
treated as an include only if there is no preceding #
(##include is a comment) and there is no whitespace
between # and include (#
include is a comment).
You can also use include without the leading
#.
include "/etc/apparmor.d/abstractions/foo"
is the same as using
#include "/etc/apparmor.d/abstractions/foo"
Note that because includes follow the C pre-processor syntax, they do not have a trailing ',' like most AppArmor rules.
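The disambiguation rules above can be sketched as a small shell check. This is an illustration only, not the real apparmor_parser logic:

```shell
# Decide whether a profile line is an include statement according to
# the rules above: "#include <x>" and "include <x>" are includes,
# while "##include <x>" (extra '#') and "# include <x>" (whitespace
# between '#' and 'include') are comments.
is_include() {
    case "$1" in
        '#include'*|'include'*) return 0 ;;
        *) return 1 ;;
    esac
}
```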
By slight changes in syntax, you can modify the behavior of include. If you use quotes ("") around the included path, you instruct the parser to do an absolute or relative path lookup.
include "/etc/apparmor.d/abstractions/foo"   # absolute path
include "abstractions/foo"                   # relative path to the directory of the current file
Note that with relative path includes, the included file becomes the new current file for its own includes. For example, suppose you are in the /etc/apparmor.d/bar file, then
include "abstractions/foo"
includes the file /etc/apparmor.d/abstractions/foo.
If then there is
include "example"
inside the /etc/apparmor.d/abstractions/foo file, it
includes /etc/apparmor.d/abstractions/example.
The use of <> instructs the parser to try the include path (specified by -I, which defaults to the /etc/apparmor.d directory) in an ordered way. So
assuming the include path is
-I /etc/apparmor.d/ -I /usr/share/apparmor/
then the include statement
include <abstractions/foo>
will try /etc/apparmor.d/abstractions/foo, and if
that file does not exist, the next try is
/usr/share/apparmor/abstractions/foo.
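This ordered lookup can be sketched in shell. The function below illustrates the search order described above; it is not the parser's actual implementation:

```shell
# Resolve an include name such as "abstractions/foo" against an
# ordered list of include directories, printing the first match.
resolve_include() {
    name=$1; shift
    for dir in "$@"; do
        if [ -e "$dir/$name" ]; then
            printf '%s\n' "$dir/$name"
            return 0
        fi
    done
    return 1
}
```

For example, resolve_include abstractions/foo /etc/apparmor.d /usr/share/apparmor mirrors the behavior described above.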
The default include path can be overridden manually by passing
-I to the apparmor_parser, or by
setting the include paths in
/etc/apparmor/parser.conf:
Include /usr/share/apparmor/
Include /etc/apparmor.d/
Multiple entries are allowed, and they are taken in the same order as when using -I or --Include on the apparmor_parser command line.
If an include ends with '/', this is considered a directory include, and all files within the directory are included.
To assist you in profiling your applications, AppArmor provides three classes of includes: abstractions, program chunks and tunables.
Abstractions are includes that are grouped by common application tasks.
These tasks include access to authentication mechanisms, access to name
service routines, common graphics requirements, and system accounting.
Files listed in these abstractions are specific to the named task.
Programs that require one of these files usually also require
other files listed in the abstraction file (depending on the local
configuration and the specific requirements of the program). Find
abstractions in /etc/apparmor.d/abstractions.
The program-chunks directory
(/etc/apparmor.d/program-chunks) contains some
chunks of profiles that are specific to program suites and not generally
useful outside of the suite, thus are never suggested for use in
profiles by the profile wizards (aa-logprof and
aa-genprof). Currently, program chunks are only
available for the postfix program suite.
The tunables directory (/etc/apparmor.d/tunables)
contains global variable definitions. When used in a profile, these
variables expand to a value that can be changed without changing the
entire profile. Add all the tunables definitions that should be
available to every profile to
/etc/apparmor.d/tunables/global.
Capability rules are simply the word capability
followed by the name of the POSIX.1e capability as defined in the
capabilities(7) man page. You can list multiple
capabilities in a single rule, or grant all implemented capabilities with
the bare keyword capability.
capability dac_override sys_admin, # multiple capabilities capability, # grant all capabilities
AppArmor allows mediation of network access based on the address type and family. The following illustrates the network access rule syntax:
network [[<domain>][<type>][<protocol>]]
Supported domains include inet, inet6, unix, netlink, and packet, among others.
Supported types are stream, dgram, seqpacket, rdm, raw, and packet.
Supported protocols are tcp, udp, and icmp.
The AppArmor tools support only family and type specification. The AppArmor module emits only network DOMAIN TYPE in “ACCESS DENIED” messages, and only these are output by the profile generation tools, both YaST and command line.
The following examples illustrate possible network-related rules to be used in AppArmor profiles. Note that the syntax of the last two are not currently supported by the AppArmor tools.
network,
network inet,
network inet6,
network inet stream,
network inet tcp,
network tcp,
Allow all networking. No restrictions applied with regard to domain, type, or protocol.
Allow general use of IPv4 networking.
Allow general use of IPv6 networking.
Allow the use of IPv4 TCP networking.
Allow the use of IPv4 TCP networking, paraphrasing the rule above.
Allow the use of both IPv4 and IPv6 TCP networking.
A profile is usually attached to a program by specifying a full path to the program's executable. For example in the case of a standard profile (see Section 21.2.1, “Standard Profiles”), the profile is defined by
/usr/bin/foo { ... }

The following sections describe several useful techniques that can be applied when naming a profile, putting a profile in the context of other existing ones, or specifying file paths.
AppArmor explicitly distinguishes directory path names from file path
names. Use a trailing / for any directory path that
needs to be explicitly distinguished:
/some/random/example/* r
Allow read access to files in the
/some/random/example directory.
/some/random/example/ r
Allow read access to the directory only.
/some/**/ r
Give read access to any directories below /some
(but not /some/ itself).
/some/random/example/** r
Give read access to files and directories under
/some/random/example (but not
/some/random/example/ itself).
/some/random/example/**[^/] r
Give read access to files under
/some/random/example. Explicitly exclude
directories ([^/]).
Globbing (or regular expression matching) is when you modify the directory path using wild cards to include a group of files or subdirectories. File resources can be specified with a globbing syntax similar to that used by popular shells, such as csh, Bash, and zsh.
*
Substitutes for any number of any characters, except /.
Example: an arbitrary number of file path elements.

**
Substitutes for any number of characters, including /.
Example: an arbitrary number of path elements, including entire directories.

?
Substitutes for any single character, except /.

[abc]
Substitutes for the single character a, b, or c.
Example: a rule that matches /home[01]/*/.plan allows a program to access .plan files for users in both /home0 and /home1.

[a-c]
Substitutes for the single character a, b, or c.

{ab,cd}
Expands to one rule to match ab and one rule to match cd.
Example: a rule that matches /{usr,www}/pages/** grants access to Web pages in both /usr/pages and /www/pages.

[^a]
Substitutes for any character except a.
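The matching semantics above can be illustrated with a rough shell translation of AppArmor globs into extended regular expressions. This sketch handles only *, **, and ? (not {} alternation), and is not AppArmor's real matcher:

```shell
# Translate a simplified AppArmor glob into an anchored POSIX
# extended regular expression.
glob_to_regex() {
    printf '%s\n' "$1" | sed \
        -e 's|\.|\\.|g' \
        -e 's|\*\*|@@DS@@|g' \
        -e 's|\*|[^/]*|g' \
        -e 's|@@DS@@|.*|g' \
        -e 's|?|[^/]|g' \
        -e 's|^|^|' \
        -e 's|$|$|'
}

# Check whether a path matches a glob.
glob_matches() {
    printf '%s\n' "$2" | grep -Eq "$(glob_to_regex "$1")"
}
```

For example, glob_matches '/some/random/example/*' '/some/random/example/file' succeeds, while the same glob does not match paths in subdirectories.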
Profile flags control the behavior of the related profile. You can add profile flags to the profile definition by editing it manually, see the following syntax:
/path/to/profiled/binary flags=(list_of_flags) {
[...]
}

You can use multiple flags separated by a comma ',' or space ' '. There are three basic types of profile flags: mode, relative, and attach flags.
The mode flag is complain (illegal accesses are allowed and logged). If it is omitted, the profile is in enforce mode (the policy is enforced).
A more flexible way of setting the whole profile into complain mode is
to create a symbolic link from the profile file inside the
/etc/apparmor.d/force-complain/ directory.
ln -s /etc/apparmor.d/bin.ping /etc/apparmor.d/force-complain/bin.ping
Relative flags are
chroot_relative (states that the profile is relative
to the chroot instead of namespace) or
namespace_relative (the default, with the path being
relative to outside the chroot). They are mutually exclusive.
Attach flags consist of two pairs of mutually
exclusive flags: attach_disconnected or
no_attach_disconnected (determine if path names
resolved to be outside of the namespace are attached to the root, which
means they have the '/' character at the beginning), and
chroot_attach or chroot_no_attach
(control path name generation when in a chroot environment while a file
is accessed that is external to the chroot but within the namespace).
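For illustration, flags from both groups can be combined in a single profile header (the daemon path is hypothetical):

```
# hypothetical daemon that accesses files inside and outside its chroot
/usr/sbin/exampled flags=(attach_disconnected, chroot_attach) {
  [...]
}
```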
AppArmor allows you to use variables holding paths in profiles. Use global variables to make your profiles portable and local variables to create shortcuts for paths.
A typical example of when global variables come in handy are network
scenarios in which user home directories are mounted in different
locations. Instead of rewriting paths to home directories in all
affected profiles, you only need to change the value of a variable.
Global variables are defined under
/etc/apparmor.d/tunables and need to be made
available via an include statement. Find the variable definitions for
this use case (@{HOME} and @{HOMEDIRS}) in
the /etc/apparmor.d/tunables/home file.
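A sketch of how such a variable is then used inside a rule, assuming the standard tunables are included (the profile path and rule below are illustrative):

```
# tunables/global pulls in tunables/home, which defines @{HOME}
#include <tunables/global>

/usr/bin/example {
  # grant the owning user read/write access to its data in any home location
  owner @{HOME}/.example/** rw,
}
```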
Local variables are defined at the head of a profile. This is useful to provide the base for a chrooted path, for example:
@{CHROOT_BASE}=/tmp/foo
/sbin/rsyslogd {
...
# chrooted applications
@{CHROOT_BASE}/var/lib/*/dev/log w,
@{CHROOT_BASE}/var/log/** w,
...
}
In the following example, @{HOMEDIRS} lists the locations where user home directories are stored, while @{HOME} is a space-separated list of home directories. Later, @{HOMEDIRS} is extended by two additional locations where user home directories are stored.
@{HOMEDIRS}=/home/
@{HOME}=@{HOMEDIRS}/*/ /root/
[...]
@{HOMEDIRS}+=/srv/nfs/home/ /mnt/home/
With the current AppArmor tools, variables can only be used when manually editing and maintaining a profile.
Profile names can contain globbing expressions allowing the profile to match against multiple binaries.
The following example is valid for systems where the
foo binary resides either in
/usr/bin or /bin.
/{usr/,}bin/foo { ... }
In the following example, when matching against the executable
/bin/foo, the /bin/foo profile
is an exact match so it is chosen. For the executable
/bin/fat, the profile /bin/foo
does not match, and because the /bin/f* profile is
more specific (less general) than /bin/**, the
/bin/f* profile is chosen.
/bin/foo { ... }
/bin/f* { ... }
/bin/** { ... }
For more information on profile name globbing examples, see the man page
of AppArmor, man 5 apparmor.d, section
Globbing.
Namespaces are used to provide different profile sets, for example one for the
system and another for a chroot environment or container. Namespaces are
hierarchical—a namespace can see its children but a child
cannot see its parent. Namespace names start with a colon
: followed by an alphanumeric string, a trailing
colon : and an optional double slash
//, such as
:childNameSpace://.
Profiles loaded to a child namespace will be prefixed with their namespace name (viewed from a parent's perspective):
:childNameSpace://apache
Namespaces can be entered via the change_profile API,
or named profile transitions:
/path/to/executable px -> :childNameSpace://apache
Profiles can have a name, and an attachment specification. This allows for profiles with a logical name that can be more meaningful to users/administrators than a profile name that contains pattern matching (see Section 21.6.3, “Pattern Matching”). For example, the default profile
/** { ... }
can be named
profile default /** { ... }
Also, a profile with pattern matching can be named. For example:
/usr/lib/firefox-3.*/firefox-*bin { ... }
can be named
profile firefox /usr/lib/firefox-3.*/firefox-*bin { ... }
Alias rules provide an alternative way to manipulate profile path mappings to site specific layouts. They are an alternative form of path rewriting to using variables, and are done post variable resolution. The alias rule says to treat rules that have the same source prefix as if they were at the target prefix.
alias /home/ -> /usr/home/
All the rules that have a prefix match to /home/
will provide access to /usr/home/. For example, the rule
/home/username/** r,
also allows access to
/usr/home/username/** r,
Aliases provide a quick way of remapping rules without the need to
rewrite them. They keep the source path still accessible—in our
example, the alias rule keeps the paths under
/home/ still accessible.
With the alias rule, you can point to multiple
targets at the same time.
alias /home/ -> /usr/home/
alias /home/ -> /mnt/home/
With the current AppArmor tools, alias rules can only be used when manually editing and maintaining a profile.
Insert global alias definitions in the file
/etc/apparmor.d/tunables/alias.
File permission access modes consist of combinations of the following modes:

r
Read mode

w
Write mode (mutually exclusive to a)

a
Append mode (mutually exclusive to w)

k
File locking mode

l
Link mode

link file -> target
Link pair rule (cannot be combined with other access modes)
Allows the program to have read access to the resource. Read access is required for shell scripts and other interpreted content and determines if an executing process can core dump.
Allows the program to have write access to the resource. Files must have this permission if they are to be unlinked (removed).
Allows a program to write to the end of a file. In contrast to the
w mode, the append mode does not include the ability
to overwrite data, to rename, or to remove a file. The append permission
is typically used with applications that need to be able to write to log
files, but which should not be able to manipulate any existing data in
the log files. As the append permission is a subset of the permissions
associated with the write mode, the w and
a permission flags cannot be used together and are
mutually exclusive.
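For example, a hypothetical logging rule could grant append-only access, so the program can add entries to its log but cannot truncate or rewrite it:

```
# illustrative path: the daemon may only append to its log file
/var/log/exampled.log a,
```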
The application can take file locks. Former versions of AppArmor allowed files to be locked if an application had access to them. By using a separate file locking mode, AppArmor makes sure locking is restricted only to those files which need file locking and tightens security as locking can be used in several denial of service attack scenarios.
The link mode mediates access to hard links. When a link is created, the target file must have the same access permissions as the link created (but the destination does not need link access).
The link mode grants permission to link to arbitrary files, provided the link has a subset of the permissions granted by the target (subset permission test).
/srv/www/htdocs/index.html rl,
By specifying origin and destination, the link pair rule provides greater control over how hard links are created. Link pair rules by default do not enforce the link subset permission test that the standard rules link permission requires.
link /srv/www/htdocs/index.html -> /var/www/index.html
To force the rule to require the test, the subset
keyword is used. The following rules are equivalent:
/var/www/index.html l,
link subset /var/www/index.html -> /**,
Currently link pair rules are not supported by YaST and the command line tools. Manually edit your profiles to use them. Updating such profiles using the tools is safe, because the link pair entries will not be touched.
allow and file Rules
The allow prefix is optional, and it is idiomatically
implied if not specified and the deny (see
Section 21.7.9, “Deny Rules”) keyword is not used.
allow file /example r,
allow /example r,
allow network,
You can also use the optional file keyword. If you
omit it and there are no other rule types that start with a keyword,
such as network or mount, it is
automatically implied.
file /example/rule r,
is equivalent to
/example/rule r,
The following rule grants access to all files:
file,
which is equal to
/** rwmlk,
File rules can use leading or trailing permissions. The permissions should not be specified as a trailing permission, but rather used at the start of the rule. This is important in that it makes file rules behave like any other rule types.
/path rw,            # old style
rw /path,            # leading permission
file rw /path,       # with explicit 'file' keyword
allow file rw /path, # optional 'allow' keyword added
The file rules can be extended so that they can be conditional upon
the user being the owner of the file (the fsuid needs to match the
file's uid). For this purpose the owner keyword
is put in front of the rule. Owner conditional rules accumulate like
regular file rules do.
owner /home/*/** rw,
When using file ownership conditions with link rules, the ownership test is done against the target file, so the user must own the file to be able to link to it.
Owner conditional rules are considered a subset of regular file rules. If a regular file rule overlaps with an owner conditional file rule, the rules are merged. Consider the following example.
/foo r,
owner /foo rw, # or w,
The rules are merged—it results in r for
everybody, and w for the owner only.
To address everybody but the owner of the file,
use the keyword other.
owner /foo rw,
other /foo r,
Deny rules can be used to annotate or quiet known rejects. The
profile generating tools will not ask about a known reject treated
with a deny rule. Such a reject will also not show up in the audit
logs when denied, keeping the log files lean. If this is not
desired, put the keyword audit in front of the
deny entry.
It is also possible to use deny rules in combination with allow rules.
This allows you to specify a broad allow rule, and then subtract a few
known files that should not be allowed. Deny rules can also be combined
with owner rules, to deny files owned by the user. The following example
allows read/write access to everything in a user's directory except write
access to the .ssh/ files:
deny /home/*/.ssh/** w,
owner /home/*/** rw,
The extensive use of deny rules is generally not encouraged, because it makes it much harder to understand what a profile does. However, judicious use of deny rules can simplify profiles. Therefore the tools only generate profiles denying specific files and will not use globbing in deny rules. Manually edit your profiles to add deny rules using globbing. Updating such profiles using the tools is safe, because the deny entries will not be touched.
Execute modes, also named profile transitions, consist of the following modes:

Px
Discrete profile execute mode

Cx
Discrete local profile execute mode

Ux
Unconfined execute mode

ix
Inherit execute mode

m
Allow PROT_EXEC with mmap(2) calls
This mode requires that a discrete security profile is defined for a resource executed at an AppArmor domain transition. If there is no profile defined, the access is denied.
Incompatible with Ux, ux,
px, and ix.
As Px, but instead of searching the global profile
set, Cx only searches the local profiles of the
current profile. This profile transition provides a way for an
application to have alternate profiles for helper applications.
Currently, Cx transitions are limited to top level profiles and cannot be used in hats and child profiles. This restriction will be removed in the future.
Incompatible with Ux, ux,
Px, px, cx, and
ix.
Allows the program to execute the resource without any AppArmor profile
applied to the executed resource. This mode is useful when a confined
program needs to be able to perform a privileged operation, such as
rebooting the machine. By placing the privileged section in another
executable and granting unconfined execution rights, it is possible to
bypass the mandatory constraints imposed on all confined processes.
Allowing a root process to go unconfined means it can change AppArmor
policy itself. For more information about what is constrained, see the
apparmor(7) man page.
This mode is incompatible with ux,
px, Px, and ix.
Use the lowercase versions of exec modes—px,
cx, ux—only in very
special cases. They do not scrub the environment of variables such as
LD_PRELOAD. As a result, the calling domain may have an
undue amount of influence over the called resource. Use these modes only
if the child absolutely must be run unconfined and
LD_PRELOAD must be used. Any profile using such modes
provides negligible security. Use at your own risk.
ix prevents the normal AppArmor domain transition on
execve(2) when the profiled program executes the
named program. Instead, the executed resource inherits the current
profile.
This mode is useful when a confined program needs to call another
confined program without gaining the permissions of the target's profile
or losing the permissions of the current profile. There is no version to
scrub the environment because ix executions do not
change privileges.
Incompatible with cx, ux, and
px. Implies m.
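As a minimal sketch (the path is illustrative), a confined program that calls gzip could keep gzip under the caller's own restrictions with an inherit rule:

```
# gzip runs under the calling profile instead of transitioning to its own
/usr/bin/gzip ix,
```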
This mode allows a file to be mapped into memory using
mmap(2)'s PROT_EXEC flag. This flag
marks the pages executable. It is used on some architectures to provide
non executable data pages, which can complicate exploit attempts.
AppArmor uses this mode to limit which files a well-behaved program (or
all programs on architectures that enforce non executable memory access
controls) may use as libraries, to limit the effect of invalid
-L flags given to ld(1) and
LD_PRELOAD, LD_LIBRARY_PATH, given to
ld.so(8).
By default, the px and cx (and
their clean exec variants, too) transition to a profile whose name
matches the executable name. With named profile transitions, you can
specify a profile to be transitioned to. This is useful if multiple
binaries need to share a single profile, or if they need to use a
different profile than their name would specify. Named profile
transitions can be used with cx,
Cx, px and Px.
Currently there is a limit of twelve named profile transitions per
profile.
Named profile transitions use -> to indicate the
name of the profile that needs to be transitioned to:
/usr/bin/foo
{
/bin/** px -> shared_profile,
...
/usr/*bash cx -> local_profile,
...
profile local_profile
{
...
}
}
When used with globbing, normal transitions provide a “one to
many” relationship—/bin/** px will
transition to /bin/ping,
/bin/cat, etc, depending on the program being run.
Named transitions provide a “many to one” relationship—all programs that match the rule regardless of their name will transition to the specified profile.
Named profile transitions show up in the log as having the mode
Nx. The name of the profile to be changed to is
listed in the name2 field.
The px and cx transitions specify
a hard dependency—if the specified profile does not exist, the
exec will fail. With the inheritance fallback, the execution will
succeed but inherit the current profile. To specify inheritance
fallback, ix is combined with cx,
Cx, px and Px
into the modes cix, Cix,
pix and Pix.
/path Cix -> profile_name,
or
Cix /path -> profile_name,
where -> profile_name is optional.
The same applies if you add the unconfined ux mode,
where the resulting modes are cux,
CUx, pux and
PUx. These modes allow falling back to
“unconfined” when the specified profile is not found.
/path PUx -> profile_name,
or
PUx /path -> profile_name,
where -> profile_name is optional.
The fallback modes can be used with named profile transitions, too.
When choosing one of the Px, Cx or Ux execution modes, take into account that the following environment variables are removed from the environment before the child process inherits it. As a consequence, applications or processes relying on any of these variables do not work anymore if the profile applied to them carries Px, Cx or Ux flags:
GCONV_PATH
GETCONF_DIR
HOSTALIASES
LD_AUDIT
LD_DEBUG
LD_DEBUG_OUTPUT
LD_DYNAMIC_WEAK
LD_LIBRARY_PATH
LD_ORIGIN_PATH
LD_PRELOAD
LD_PROFILE
LD_SHOW_AUXV
LD_USE_LOAD_BIAS
LOCALDOMAIN
LOCPATH
MALLOC_TRACE
NLSPATH
RESOLV_HOST_CONF
RES_OPTIONS
TMPDIR
TZDIR
safe and unsafe Keywords
You can use the safe and unsafe
keywords for rules instead of using the case modifier of execution
modes. For example
/example_rule Px,
is the same as any of the following
safe /example_rule px,
safe /example_rule Px,
safe px /example_rule,
safe Px /example_rule,
and the rule
/example_rule px,
is the same as any of
unsafe /example_rule px,
unsafe /example_rule Px,
unsafe px /example_rule,
unsafe Px /example_rule,
The safe/unsafe keywords are
mutually exclusive and can be used in a file rule after the
owner keyword, so the order of rule keywords is
[audit] [deny] [owner] [safe|unsafe] file_rule
AppArmor can set and control an application's resource limits (rlimits, also known as ulimits). By default, AppArmor does not control an application's rlimits; it only controls those limits specified in the confining profile. For more information about resource
limits, refer to the setrlimit(2),
ulimit(1), or ulimit(3)
man pages.
AppArmor leverages the system's rlimits and as such does not provide additional auditing that would normally occur. It also cannot raise rlimits set by the system; AppArmor rlimits can only reduce an application's current resource limits.
The values will be inherited by the children of a process and will remain even if a new profile is transitioned to or the application becomes unconfined. So when an application transitions to a new profile, that profile can further reduce the application's rlimits.
AppArmor's rlimit rules will also provide mediation of setting an application's hard limits, should it try to raise them. The application cannot raise its hard limits any further than specified in the profile. The mediation of raising hard limits is not inherited as the set value is, so that when the application transitions to a new profile it is free to raise its limits as specified in the profile.
AppArmor's rlimit control does not affect an application's soft limits beyond ensuring that they are less than or equal to the application's hard limits.
AppArmor's hard limit rules have the general form of:
set rlimit RESOURCE <= VALUE,
where RESOURCE and VALUE are to be replaced with the following values:
cpu
CPU time limit in seconds.
fsize, data, stack,
core, rss, as,
memlock, msgqueue
a number in bytes, or a number with a suffix where the suffix can be K/KB (kilobytes), M/MB (megabytes), G/GB (gigabytes), for example
rlimit data <= 100M,
fsize, nofile, locks,
sigpending, nproc*,
rtprio
a number greater than or equal to 0
nice
a value between -20 and 19
*The nproc rlimit is handled differently from all the other rlimits. Instead of indicating the standard process rlimit, it controls the maximum number of processes that can be running under the profile at any time. When the limit is exceeded, the creation of new processes under the profile fails until the number of currently running processes is reduced.
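Putting the rule form and value types together, a hypothetical profile could cap several limits at once (the binary path and the specific values are illustrative):

```
/usr/bin/example {
  set rlimit cpu <= 60,      # at most 60 seconds of CPU time
  set rlimit data <= 100M,   # data segment capped at 100 megabytes
  set rlimit nproc <= 20,    # at most 20 processes under this profile
  set rlimit nice <= 5,      # nice value limited per the -20..19 range
}
```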
Currently the tools cannot be used to add rlimit rules to profiles. The only way to add rlimit controls to a profile is to manually edit the profile with a text editor. The tools will still work with profiles containing rlimit rules and will not remove them, so it is safe to use the tools to update profiles containing them.
AppArmor provides the ability to audit given rules so that when they are
matched an audit message will appear in the audit log. To enable audit
messages for a given rule, the audit keyword is
put in front of the rule:
audit /etc/foo/* rw,
If it is desirable to audit only a given permission the rule can be split into two rules. The following example will result in audit messages when files are opened for writing, but not when they are opened for reading:
audit /etc/foo/* w, /etc/foo/* r,
Audit messages are not generated for every read or write of a file but only when a file is opened for reading or writing.
Audit control can be combined with
owner/other conditional file rules
to provide auditing when users access files they own/do not own:
audit owner /home/*/.ssh/** rw, audit other /home/*/.ssh/** r,
AppArmor ships with a set of profiles enabled by default. These are created
by the AppArmor developers, and are stored in
/etc/apparmor.d. In addition to these profiles,
openSUSE Leap ships profiles for individual applications together with
the relevant application. These profiles are not enabled by default, and
reside under another directory than the standard AppArmor profiles,
/etc/apparmor/profiles/extras.
The AppArmor tools (YaST, aa-genprof and
aa-logprof) support the use of a local repository.
Whenever you start to create a new profile from scratch, and there
already is an inactive profile in your local repository, you are asked
whether you want to use the existing inactive one from
/etc/apparmor/profiles/extras and whether you want
to base your efforts on it. If you decide to use this profile, it gets
copied over to the directory of profiles enabled by default
(/etc/apparmor.d) and loaded whenever AppArmor is
started. Any further adjustments will be done to the active profile under
/etc/apparmor.d.
YaST provides a basic way to build and manage AppArmor® profiles. It provides two interfaces: a graphical one and a text-based one. The text-based interface consumes fewer resources and bandwidth, making it a better choice for remote administration, or for times when a local graphical environment is inconvenient. Although the interfaces have differing appearances, they offer the same functionality in similar ways. Another alternative is to use AppArmor commands, which can control AppArmor from a terminal window or through remote connections. The command line tools are described in Chapter 24, Building Profiles from the Command Line.
Start YaST from the main menu and enter your root password
when prompted for it. Alternatively, start YaST by opening a terminal
window, logging in as root, and entering yast2
for the graphical mode or yast for the text-based mode.
In the section, there is an icon. Click it to launch the AppArmor YaST module.
AppArmor enables you to create an AppArmor profile by manually adding entries into the profile. Select the application for which to create a profile, then add entries.
Start YaST, select , and click in the main window.
Browse your system to find the application for which to create a profile.
When you find the application, select it and click . A basic, empty profile appears in the window.
In , add, edit, or delete AppArmor profile entries by clicking the corresponding buttons and referring to Section 23.2.1, “Adding an Entry”, Section 23.2.2, “Editing an Entry”, or Section 23.2.3, “Deleting an Entry”.
When finished, click .
YaST offers basic manipulation for AppArmor profiles, such
as creating or editing. However, the most straightforward way
to edit an AppArmor
profile is to use a text editor such as vi:
tux > sudo vi /etc/apparmor.d/usr.sbin.httpd2-prefork
The vi editor also includes syntax highlighting and syntax error
highlighting, which visually warn you when the syntax of the edited
AppArmor profile is wrong.
AppArmor enables you to edit AppArmor profiles manually by adding, editing, or deleting entries. To edit a profile, proceed as follows:
Start YaST, select , and click in the main window.
From the list of profiled applications, select the profile to edit.
Click . The window displays the profile.
In the window, add, edit, or delete AppArmor profile entries by clicking the corresponding buttons and referring to Section 23.2.1, “Adding an Entry”, Section 23.2.2, “Editing an Entry”, or Section 23.2.3, “Deleting an Entry”.
When you are finished, click .
In the pop-up that appears, click to confirm your changes to the profile and reload the AppArmor profile set.
AppArmor contains a syntax check that notifies you of any syntax errors
in profiles you are trying to process with the YaST AppArmor tools.
If an error occurs, edit the profile manually as root and
reload the profile set with systemctl reload
apparmor.
The button in the lists types of entries you can add to the AppArmor profile.
From the list, select one of the following:
In the pop-up window, specify the absolute path of a file, including the type of access permitted. When finished, click .
You can use globbing if necessary. For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For file access permission information, refer to Section 21.7, “File Permission Access Modes”.
In the pop-up window, specify the absolute path of a directory, including the type of access permitted. You can use globbing if necessary. When finished, click .
For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For file access permission information, refer to Section 21.7, “File Permission Access Modes”.
In the pop-up window, select the appropriate network family and the socket type. For more information, refer to Section 21.5, “Network Access Control”.
In the pop-up window, select the appropriate capabilities. These are statements that enable each of the 32 POSIX.1e capabilities. Refer to Section 21.4, “Capability Entries (POSIX.1e)” for more information about capabilities. When finished making your selections, click .
In the pop-up window, browse to the files to use as includes. Includes are directives that pull in components of other AppArmor profiles to simplify profiles. For more information, refer to Section 21.3, “Include Statements”.
In the pop-up window, specify the name of the subprofile (hat) to add to your current profile and click . For more information, refer to Chapter 25, Profiling Your Web Applications Using ChangeHat.
When you select , a pop-up window opens. From here, edit the selected entry.
In the pop-up window, edit the entry you need to modify. You can use globbing if necessary. When finished, click .
For globbing information, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”. For access permission information, refer to Section 21.7, “File Permission Access Modes”.
To delete an entry in a given profile, select . AppArmor removes the selected profile entry.
AppArmor enables you to delete an AppArmor profile manually. Simply select the application for which to delete a profile then delete it as follows:
Start YaST, select , and click in the main window.
Select the profile to delete.
Click .
In the pop-up that opens, click to delete the profile and reload the AppArmor profile set.
You can change the status of AppArmor by enabling or disabling it. Enabling AppArmor protects your system from potential program exploitation. Disabling AppArmor, even if your profiles have been set up, removes protection from your system. To change the status of AppArmor, start YaST, select , and click in the main window.
To change the status of AppArmor, continue as described in Section 23.4.1, “Changing AppArmor Status”. To change the mode of individual profiles, continue as described in Section 23.4.2, “Changing the Mode of Individual Profiles”.
When you change the status of AppArmor, set it to enabled or disabled. When AppArmor is enabled, it is installed, running, and enforcing the AppArmor security policies.
Start YaST, select , and click in the main window.
Enable AppArmor by checking or disable AppArmor by deselecting it.
Click in the window.
You always need to restart running programs to apply the profiles to them.
AppArmor can apply profiles in two different modes. In
complain mode, violations of AppArmor profile rules,
such as the profiled program accessing files not permitted by the
profile, are detected. The violations are permitted, but also logged.
This mode is convenient for developing profiles and is used by the
AppArmor tools for generating profiles. Loading a profile in
enforce mode enforces the policy defined in the
profile, and reports policy violation attempts to
rsyslogd (or
auditd or
journalctl, depending on system
configuration).
The dialog allows you to view and edit the mode of currently loaded AppArmor profiles. This feature is useful for determining the status of your system during profile development. During systemic profiling (see Section 24.7.2, “Systemic Profiling”), you can use this tool to adjust and monitor the scope of the profiles for which you are learning behavior.
To edit an application's profile mode, proceed as follows:
Start YaST, select , and click in the main window.
In the section, select .
Select the profile for which to change the mode.
Select to set this profile to complain mode or to enforce mode.
Apply your settings and leave YaST with .
To change the mode of all profiles, use or .
By default, only active profiles are listed (any profile that has a matching application installed on your system). To set up a profile before installing the respective application, click and select the profile to configure from the list that appears.
AppArmor® lets you use a command line interface rather than a graphical interface to manage and configure system security. Track the status of AppArmor and create, delete, or modify AppArmor profiles using the AppArmor command line tools.
Before starting to manage your profiles using the AppArmor command line tools, check out the general introduction to AppArmor given in Chapter 20, Immunizing Programs and Chapter 21, Profile Components and Syntax.
AppArmor can be in any one of three states:
Unloaded: AppArmor is not activated in the kernel.
Running: AppArmor is activated in the kernel and is enforcing AppArmor program policies.
Stopped: AppArmor is activated in the kernel, but no policies are enforced.
Detect the state of AppArmor by inspecting
/sys/kernel/security/apparmor/profiles. If
cat /sys/kernel/security/apparmor/profiles reports a
list of profiles, AppArmor is running. If it is empty and returns nothing,
AppArmor is stopped. If the file does not exist, AppArmor is unloaded.
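The check described above can be sketched as a small shell helper. The classification logic follows the text; the function takes the file to inspect as a parameter so the sketch stays testable:

```shell
#!/bin/sh
# Classify the AppArmor state from the profiles file, as described above.
# Pass the file to inspect; normally /sys/kernel/security/apparmor/profiles.
apparmor_state() {
    f="$1"
    if [ ! -e "$f" ]; then
        echo unloaded    # file absent: AppArmor is not activated in the kernel
    elif [ -s "$f" ]; then
        echo running     # file lists profiles: AppArmor is enforcing policies
    else
        echo stopped     # file present but empty: no policies are enforced
    fi
}

apparmor_state /sys/kernel/security/apparmor/profiles
```

On a system without AppArmor (or without securityfs mounted), the helper reports "unloaded", matching the behavior described for the missing file.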
Manage AppArmor with systemctl. It lets you perform the
following operations:
sudo systemctl start apparmor
Behavior depends on the state of AppArmor. If it is not activated,
start activates and starts it, putting it in the
running state. If it is stopped, start causes the
re-scan of AppArmor profiles usually found in
/etc/apparmor.d and puts AppArmor in the running
state. If AppArmor is already running, start reports a
warning and takes no action.
Already running processes need to be restarted to apply the AppArmor profiles on them.
sudo systemctl stop apparmor
Stops AppArmor if it is running by removing all profiles from kernel
memory, effectively disabling all access controls, and putting AppArmor
into the stopped state. If AppArmor is already stopped,
stop tries to unload the profiles again, but nothing
happens.
sudo systemctl reload apparmor
Causes the AppArmor module to re-scan the profiles in
/etc/apparmor.d without unconfining running
processes. Freshly created profiles are enforced and recently deleted
ones are removed from the /etc/apparmor.d
directory.
The AppArmor module profile definitions are stored in the
/etc/apparmor.d directory as plain text files. For a
detailed description of the syntax of these files, refer to
Chapter 21, Profile Components and Syntax.
All files in the /etc/apparmor.d directory are
interpreted as profiles and are loaded as such. Renaming files in that
directory is not an effective way of preventing profiles from being
loaded. To prevent a profile from being read and evaluated, remove it
from this directory, or call aa-disable on the
profile, which creates a symbolic link in
/etc/apparmor.d/disabled/.
You can use a text editor, such as vi, to access and
make changes to these profiles. The following sections contain detailed
steps for building profiles:
Refer to Section 24.3, “Adding or Creating an AppArmor Profile”
To add or create an AppArmor profile for an application, you can use a systemic or stand-alone profiling method, depending on your needs. Learn more about these two approaches in Section 24.7, “Two Methods of Profiling”.
The following steps describe the procedure for editing an AppArmor profile:
If you are not currently logged in as root, enter
su in a terminal window.
Enter the root password when prompted.
Go to the profile directory with cd
/etc/apparmor.d/.
Enter ls to view all profiles currently installed.
Open the profile to edit in a text editor, such as vim.
Make the necessary changes, then save the profile.
Restart AppArmor by entering systemctl reload
apparmor in a terminal window.
aa-remove-unknown will unload all profiles that
are not stored in /etc/apparmor.d, for example
automatically generated LXD profiles. This may compromise the
security of the system. Use the -n parameter to
list all profiles that will be unloaded.
To unload all profiles that are no longer present in
/etc/apparmor.d/, run:
tux > sudo aa-remove-unknown
You can print a list of profiles that will be removed:
tux > sudo aa-remove-unknown -n
The following steps describe the procedure for deleting an AppArmor profile.
Remove the AppArmor definition from the kernel:
tux > sudo apparmor_parser -R /etc/apparmor.d/PROFILE
Remove the definition file:
tux > sudo rm /etc/apparmor.d/PROFILE
tux > sudo rm /var/lib/apparmor/cache/PROFILE
Given the syntax for AppArmor profiles in Chapter 21, Profile Components and Syntax, you could create profiles without using the tools. However, the effort involved would be substantial. To avoid such a situation, use the AppArmor tools to automate the creation and refinement of profiles.
There are two ways to approach AppArmor profile creation. Tools are available for both methods.
A method suitable for profiling small applications that have a finite runtime, such as user client applications like mail clients. For more information, refer to Section 24.7.1, “Stand-Alone Profiling”.
A method suitable for profiling many programs at once and for profiling applications that may run for days, weeks, or continuously across reboots, such as network server applications like Web servers and mail servers. For more information, refer to Section 24.7.2, “Systemic Profiling”.
Automated profile development becomes more manageable with the AppArmor tools:
Decide which profiling method suits your needs.
Perform a static analysis. Run either aa-genprof or
aa-autodep, depending on the profiling method
chosen.
Enable dynamic learning. Activate learning mode for all profiled programs.
Stand-alone profile generation and improvement is managed by a program
called aa-genprof. This method is easy because
aa-genprof takes care of everything, but is limited
because it requires aa-genprof to run for the entire
duration of the test run of your program (you cannot reboot the machine
while you are still developing your profile).
To use aa-genprof for the stand-alone method of
profiling, refer to
Section 24.7.3.8, “aa-genprof—Generating Profiles”.
This method is called systemic profiling because it
updates all of the profiles on the system at once, rather than focusing
on the one or few targeted by aa-genprof or
stand-alone profiling. With systemic profiling, profile construction and
improvement are somewhat less automated, but more flexible. This method
is suitable for profiling long-running applications whose behavior
continues after rebooting, or many programs at once.
Build an AppArmor profile for a group of applications as follows:
Create profiles for the individual programs that make up your application.
Although this approach is systemic, AppArmor only monitors those
programs with profiles and their children. To get AppArmor to consider
a program, you must at least have aa-autodep create
an approximate profile for it. To create this approximate profile,
refer to
Section 24.7.3.1, “aa-autodep—Creating Approximate Profiles”.
Put relevant profiles into learning or complain mode.
Activate learning or complain mode for all profiled programs by entering
tux > sudo aa-complain /etc/apparmor.d/*
in a terminal window while logged in as root. This
functionality is also available through the YaST Profile Mode
module, described in
Section 23.4.2, “Changing the Mode of Individual Profiles”.
When in learning mode, access requests are not blocked, even if the profile dictates that they should be. This enables you to run through several tests (as shown in Step 3) and learn the access needs of the program so it runs properly. With this information, you can decide how secure to make the profile.
Refer to Section 24.7.3.2, “aa-complain—Entering Complain or Learning Mode” for more detailed instructions for using learning or complain mode.
Exercise your application.
Run your application and exercise its functionality. How much to
exercise the program is up to you, but you need the program to access
each file representing its access needs. Because the execution is not
being supervised by aa-genprof, this step can go on
for days or weeks and can span complete system reboots.
Analyze the log.
In systemic profiling, run aa-logprof directly
instead of letting aa-genprof run it (as in
stand-alone profiling). The general form of
aa-logprof is:
tux > sudo aa-logprof [ -d /path/to/profiles ] [ -f /path/to/logfile ]
Refer to
Section 24.7.3.9, “aa-logprof—Scanning the System Log”
for more information about using aa-logprof.
This generates optimal profiles. An iterative approach captures smaller data sets that can be trained and reloaded into the policy engine. Subsequent iterations generate fewer messages and run faster.
Edit the profiles.
You should review the profiles that have been generated. You
can open and edit the profiles in
/etc/apparmor.d/ using a text editor.
Return to enforce mode.
This is when the system goes back to enforcing the rules of the
profiles, not only logging information. This can be done manually by
removing the flags=(complain) text from the
profiles or automatically by using the aa-enforce
command, which works identically to the aa-complain
command, except it sets the profiles to enforce mode. This
functionality is also available through the YaST Profile Mode
module, described in
Section 23.4.2, “Changing the Mode of Individual Profiles”.
To ensure that all profiles are taken out of complain mode and put
into enforce mode, enter aa-enforce
/etc/apparmor.d/*.
Re-scan all profiles.
To have AppArmor re-scan all of the profiles and change the enforcement
mode in the kernel, enter systemctl reload
apparmor.
All of the AppArmor profiling utilities are provided by the
apparmor-utils RPM package and are stored in
/usr/sbin. Each tool has a different purpose.
This creates an approximate profile for the program or application
selected. You can generate approximate profiles for binary executables
and interpreted script programs. The resulting profile is called
“approximate” because it does not necessarily contain all
of the profile entries that the program needs to be properly confined
by AppArmor. At minimum, the aa-autodep approximate
profile contains a base include directive, which provides basic
profile entries needed by most programs. For certain types of programs,
aa-autodep generates a more expanded profile. The
profile is generated by recursively calling ldd(1)
on the executables listed on the command line.
To generate an approximate profile, use the
aa-autodep program. The program argument can be
either the simple name of the program, which
aa-autodep finds by searching your shell's path
variable, or it can be a fully qualified path. The program itself can
be of any type (ELF binary, shell script, Perl script, etc.).
aa-autodep generates an approximate profile to
improve through the dynamic profiling that follows.
The resulting approximate profile is written to the
/etc/apparmor.d directory using the AppArmor
profile naming convention of naming the profile after the absolute path
of the program, replacing the forward slash (/)
characters in the path with period (.) characters.
The general syntax of aa-autodep is to enter the
following in a terminal window:
tux > sudo aa-autodep [ -d /PATH/TO/PROFILES ] [PROGRAM1 PROGRAM2...]
If you do not enter the program name or names, you are prompted for
them. /path/to/profiles overrides the
default location of /etc/apparmor.d, should you
keep profiles in a location other than the default.
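The profile naming convention described above can be sketched as a small shell helper (profile_name is our own illustrative function, not an AppArmor tool):

```shell
# Map a program's absolute path to its AppArmor profile file name:
# drop the leading "/" and turn the remaining "/" separators into ".".
profile_name() {
  printf '%s\n' "$1" | sed 's|^/||; s|/|.|g'
}
profile_name /usr/sbin/nscd   # prints: usr.sbin.nscd
```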
To begin profiling, you must create profiles for each main executable service that is part of your application (anything that might start without being a child of another program that already has a profile). Finding all such programs depends on the application in question. Here are several strategies for finding such programs:
If all the programs to profile are in one directory and there are no
other programs in that directory, the simple command
aa-autodep
/path/to/your/programs/* creates basic
profiles for all programs in that directory.
You can run your application and use the standard Linux
pstree command to find all processes running.
Then manually hunt down the location of these programs and run the
aa-autodep for each one. If the programs are in
your path, aa-autodep finds them for you. If they
are not in your path, the standard Linux command
find might be helpful in finding your programs.
Execute find / -name 'MY_APPLICATION' -print to determine an
application's path (MY_APPLICATION being
an example application). You may use wild cards if appropriate.
The complain or learning mode tool (aa-complain)
detects violations of AppArmor profile rules, such as the profiled
program accessing files not permitted by the profile. The violations
are permitted, but also logged. To improve the profile, turn complain
mode on, run the program through a suite of tests to generate log
events that characterize the program's access needs, then postprocess
the log with the AppArmor tools to transform log events into improved
profiles.
Manually activating complain mode (using the command line) adds a flag
to the top of the profile so that /bin/foo becomes
/bin/foo flags=(complain). To use complain mode,
open a terminal window and enter one of the following lines as
root:
If the example program (PROGRAM1) is in your path, use:
tux > sudo aa-complain [PROGRAM1 PROGRAM2 ...]
If the program is not in your path, specify the entire path as follows:
tux > sudo aa-complain /sbin/PROGRAM1
If the profiles are not in /etc/apparmor.d, use
the following to override the default location:
tux > sudo aa-complain /path/to/profiles/PROGRAM1
Specify the profile for /sbin/program1 as follows:
tux > sudo aa-complain /etc/apparmor.d/sbin.PROGRAM1
Each of the above commands activates the complain mode for the profiles
or programs listed. If the program name does not include its entire
path, aa-complain searches $PATH for
the program. For example, aa-complain /usr/sbin/*
finds profiles associated with all of the programs in
/usr/sbin and puts them into complain mode.
aa-complain /etc/apparmor.d/* puts all of the
profiles in /etc/apparmor.d into complain mode.
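In the profile file itself, the complain flag sits in the profile header. A minimal illustrative fragment (the rules shown are assumptions, not a complete profile):

```
/bin/foo flags=(complain) {
  #include <abstractions/base>
  /bin/foo mr,
}
```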
YaST offers a graphical front-end for toggling complain and enforce mode. See Section 23.4.2, “Changing the Mode of Individual Profiles” for information.
aa-decode will decode hex-encoded strings in the
AppArmor log output. It can also process the audit log on standard
input, convert any hex-encoded AppArmor log entries, and display them on
standard output.
Use aa-disable to disable the enforcement mode for
one or more AppArmor profiles. This command will unload the profile from
the kernel, and prevent the profile from being loaded on AppArmor
start-up. Use the aa-enforce or
aa-complain utilities to change this behavior.
aa-easyprof provides an easy-to-use interface for
AppArmor profile generation. aa-easyprof supports the
use of templates and profile groups to quickly profile an application.
While aa-easyprof can help with profile generation,
its utility is dependent on the quality of the templates, profile
groups and abstractions used. Also, this tool may create a profile that
is less restricted than when creating a profile manually or with
aa-genprof and aa-logprof.
For more information, see the man page of
aa-easyprof (8).
The enforce mode detects violations of AppArmor profile rules, such as the profiled program accessing files not permitted by the profile. The violations are logged and not permitted. The default is for enforce mode to be enabled. To log the violations only, but still permit them, use complain mode.
Manually activating enforce mode (using the command line) removes the
complain flag from the top of the profile so that /bin/foo
flags=(complain) becomes /bin/foo. To use
enforce mode, open a terminal window and enter one of the following
lines.
If the example program (PROGRAM1) is in your path, use:
tux > sudo aa-enforce [PROGRAM1 PROGRAM2 ...]
If the program is not in your path, specify the entire path, as follows:
tux > sudo aa-enforce /sbin/PROGRAM1
If the profiles are not in /etc/apparmor.d, use the following to override the default location:
tux > sudo aa-enforce -d /path/to/profiles/ PROGRAM1
Specify the profile for /sbin/program1 as follows:
tux > sudo aa-enforce /etc/apparmor.d/sbin.PROGRAM1
Each of the above commands activates the enforce mode for the profiles and programs listed.
If you do not enter the program or profile names, you are prompted to
enter one. /path/to/profiles overrides the
default location of /etc/apparmor.d.
The argument can be either a list of programs or a list of profiles. If
the program name does not include its entire path,
aa-enforce searches $PATH for the
program.
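The flags=(complain) removal that aa-enforce performs can be sketched as a text edit on a scratch copy of a profile (the profile content and the sed-based approach are illustrative assumptions; use the real tools on a live system):

```shell
dir=$(mktemp -d)
# A minimal profile header carrying the complain flag (illustrative):
printf '/bin/foo flags=(complain) {\n}\n' > "$dir/bin.foo"
# Enforce mode: strip the flag from the header, as described above.
sed -i 's| flags=(complain)||' "$dir/bin.foo"
head -n 1 "$dir/bin.foo"   # prints: /bin/foo {
```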
YaST offers a graphical front-end for toggling complain and enforce mode. See Section 23.4.2, “Changing the Mode of Individual Profiles” for information.
Use aa-exec to launch a program confined by a
specified profile and/or profile namespace. If both a profile and
namespace are specified, the program will be confined by the profile in
the new namespace. If only a profile namespace is specified, the
profile name of the current confinement will be used. If neither a
profile nor namespace is specified, the command will be run using the
standard profile attachment—as if you did not use the
aa-exec command.
For more information on the command's options, see its manual page
man 8 aa-exec.
aa-genprof is AppArmor's profile generating utility.
It runs aa-autodep on the specified program,
creating an approximate profile (if a profile does not already exist
for it), sets it to complain mode, reloads it into AppArmor, marks the
log, and prompts the user to execute the program and exercise its
functionality. Its syntax is as follows:
tux > sudo aa-genprof [ -d /path/to/profiles ] PROGRAM
To create a profile for the Apache Web server program httpd2-prefork,
do the following as root:
Enter systemctl stop apache2.
Next, enter aa-genprof httpd2-prefork.
Now aa-genprof does the following:
Resolves the full path of httpd2-prefork using your shell's path
variables. You can also specify a full path. On openSUSE Leap,
the default full path is
/usr/sbin/httpd2-prefork.
Checks to see if there is an existing profile for httpd2-prefork.
If there is one, it updates it. If not, it creates one using the
aa-autodep as described in
Section 24.7.3, “Summary of Profiling Tools”.
Puts the profile for this program into learning or complain mode so
that profile violations are logged, but are permitted to proceed. A
log event looks like this (see
/var/log/audit/audit.log):
type=APPARMOR_ALLOWED msg=audit(1189682639.184:20816): \
apparmor="DENIED" operation="file_mmap" parent=2692 \
profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0
If you are not running the audit daemon, the AppArmor events are
logged directly to systemd journal (see
Chapter 11, journalctl: Query the systemd Journal):
Sep 13 13:20:30 K23 kernel: audit(1189682430.672:20810): \
apparmor="DENIED" operation="file_mmap" parent=2692 \
profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0
They also can be viewed using the dmesg command:
audit(1189682430.672:20810): apparmor="DENIED" \
operation="file_mmap" parent=2692 \
profile="/usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT" \
name="/var/log/apache2/access_log-20140116" pid=28730 comm="httpd2-prefork" \
requested_mask="::r" denied_mask="::r" fsuid=30 ouid=0
Marks the log with a beginning marker of log events to consider. For example:
Sep 13 17:48:52 figwit root: GenProf: e2ff78636296f16d0b5301209a04430d
When prompted by the tool, run the application to profile in another
terminal window and perform as many of the application functions as
possible. Thus, the learning mode can log the files and directories
to which the program requires access to function properly.
For example, in a new terminal window, enter systemctl start
apache2.
Select from the following options that are available in the
aa-genprof terminal window after you have executed
the program function:
S runs aa-genprof on the system
log from where it was marked when aa-genprof was
started and reloads the profile. If system events exist in the log,
AppArmor parses the learning mode log files. This generates a series
of questions that you must answer to guide
aa-genprof in generating the security profile.
F exits the tool.
If requests to add hats appear, proceed to Chapter 25, Profiling Your Web Applications Using ChangeHat.
Answer two types of questions:
A resource is requested by a profiled program that is not in the profile (see Example 24.1, “Learning Mode Exception: Controlling Access to Specific Resources”).
A program is executed by the profiled program and the security domain transition has not been defined (see Example 24.2, “Learning Mode Exception: Defining Permissions for an Entry”).
Each of these categories results in a series of questions that you must answer to add the resource or program to the profile. Example 24.1, “Learning Mode Exception: Controlling Access to Specific Resources” and Example 24.2, “Learning Mode Exception: Defining Permissions for an Entry” provide examples of each one. Subsequent steps describe your options in answering these questions.
Dealing with execute accesses is complex. You must decide how to proceed with this entry regarding which execute permission type to grant to this entry:
Reading log entries from /var/log/audit/audit.log.
Updating AppArmor profiles in /etc/apparmor.d.
Profile:  /usr/sbin/cupsd
Program:  cupsd
Execute:  /usr/lib/cups/daemon/cups-lpd
Severity: unknown
(I)nherit / (P)rofile / (C)hild / (N)ame / (U)nconfined / (X)ix / (D)eny / Abo(r)t / (F)inish
The child inherits the parent's profile, running with the same
access controls as the parent. This mode is useful when a
confined program needs to call another confined program without
gaining the permissions of the target's profile or losing the
permissions of the current profile. This mode is often used when
the child program is a helper application,
such as the /usr/bin/mail client using
less as a pager.
The child runs using its own profile, which must be loaded into the kernel. If the profile is not present, attempts to execute the child fail with permission denied. This is most useful if the parent program is invoking a global service, such as DNS lookups or sending mail with your system's MTA.
Choose the (Px) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process.
Sets up a transition to a subprofile. It is like a px/Px transition, except that the target is a child profile.
Choose the (Cx) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process.
The child runs completely unconfined without any AppArmor profile applied to the executed resource.
Choose the (Ux) option to scrub the environment of environment variables that could modify execution behavior when passed to the child process. Note that running unconfined profiles introduces a security vulnerability that could be used to evade AppArmor. Only use it as a last resort.
This permission denotes that the program running under the
profile can access the resource using the mmap system call with
the flag PROT_EXEC. This means that the data
mapped in it can be executed. You are prompted to include this
permission if it is requested during a profiling run.
Adds a deny rule to the profile, and
permanently prevents the program from accessing the specified
directory path entries. AppArmor then continues to the next
event.
Aborts aa-logprof, losing all rule changes
entered so far and leaving all profiles unmodified.
Closes aa-logprof, saving all rule changes
entered so far and modifying all profiles.
Example 24.2, “Learning Mode Exception: Defining Permissions for an Entry”
shows AppArmor suggesting that you allow a globbing pattern
/var/run/nscd/* for reading, then using an
abstraction to cover common Apache-related access rules.
Profile:  /usr/sbin/httpd2-prefork
Path:     /var/run/nscd/dbSz9CTr
Mode:     r
Severity: 3
  1 - /var/run/nscd/dbSz9CTr
 [2 - /var/run/nscd/*]
(A)llow / [(D)eny] / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts
Adding /var/run/nscd/* r to profile.

Profile:  /usr/sbin/httpd2-prefork
Path:     /proc/11769/attr/current
Mode:     w
Severity: 9
 [1 - #include <abstractions/apache2-common>]
  2 - /proc/11769/attr/current
  3 - /proc/*/attr/current
(A)llow / [(D)eny] / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts
Adding #include <abstractions/apache2-common> to profile.
AppArmor provides one or more paths or includes. Select the desired option by entering its number, then proceed to the next step.
Not all of these options are always presented in the AppArmor menu.
#include
This is the section of an AppArmor profile that refers to an include file, which procures access permissions for programs. By using an include, you can give the program access to directory paths or files that are also required by other programs. Using includes can reduce the size of a profile. It is good practice to select includes when suggested.
This is accessed by selecting Glob as described in the next step. For information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.
This is the literal path to which the program needs access so that it can run properly.
After you select the path or include, process it as an entry into the AppArmor profile by selecting Allow or Deny. If you are not satisfied with the directory path entry as it is displayed, you can also Glob it.
The following options are available to process the learning mode entries and build the profile:
Allows access to the selected directory path.
Allows access to the specified directory path entries. AppArmor suggests file permission access. For more information, refer to Section 21.7, “File Permission Access Modes”.
Prevents the program from accessing the specified directory path entries. AppArmor then continues to the next event.
Prompts you to enter your own rule for this event, allowing you to specify a regular expression. If the expression does not actually satisfy the event that prompted the question in the first place, AppArmor asks for confirmation and lets you reenter the expression.
Select a specific path or create a general rule using wild cards that match a broader set of paths. To select any of the offered paths, enter the number that is printed in front of the path then decide how to proceed with the selected item.
For more information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.
This modifies the original directory path while retaining the
file name extension. For example,
/etc/apache2/file.ext becomes
/etc/apache2/*.ext, adding the wild card
(asterisk) in place of the file name. This allows the program to
access all files in the suggested directory that end with the
.ext extension.
Aborts aa-logprof, losing all rule changes
entered so far and leaving all profiles unmodified.
Closes aa-logprof, saving all rule changes
entered so far and modifying all profiles.
To view and edit your profile using vi, enter
vi /etc/apparmor.d/PROFILENAME in a terminal window. To
enable syntax highlighting when editing an AppArmor profile in vim,
use the commands :syntax on then :set
syntax=apparmor. For more information about vim and syntax
highlighting, refer to
Section 24.7.3.14, “apparmor.vim”.
Restart AppArmor and reload the profile set including the newly
created one using the systemctl reload
apparmor command.
Like the graphical front-end for building AppArmor profiles, the
YaST Add Profile Wizard, aa-genprof also
supports the use of the local profile repository under
/etc/apparmor/profiles/extras
and the remote AppArmor profile repository.
To use a profile from the local repository, proceed as follows:
Start aa-genprof as described above.
If aa-genprof finds an inactive local profile, the
following lines appear on your terminal window:
Profile: /usr/bin/opera
 [1 - Inactive local profile for /usr/bin/opera]
[(V)iew Profile] / (U)se Profile / (C)reate New Profile / Abo(r)t / (F)inish
To use this profile, press U (Use Profile) and follow the profile generation procedure outlined above.
To examine the profile before activating it, press V (View Profile).
To ignore the existing profile, press C (Create New Profile) and follow the profile generation procedure outlined above to create the profile from scratch.
Leave aa-genprof by pressing F
(Finish) when you are done and save your changes.
aa-logprof is an interactive tool used to review the
complain and enforce mode events found in the log entries in
/var/log/audit/audit.log, or directly in the
systemd journal (see Chapter 11, journalctl: Query the systemd Journal), and
generate new entries in AppArmor security profiles.
When you run aa-logprof, it begins to scan the log
files produced in complain and enforce mode and, if there are new
security events that are not covered by the existing profile set, it
gives suggestions for modifying the profile.
aa-logprof uses this information to observe program
behavior.
If a confined program forks and executes another program,
aa-logprof sees this and asks the user which
execution mode should be used when launching the child process. The
execution modes ix, px,
Px, ux,
Ux, cx,
Cx, and named profiles, are options for starting
the child process. If a separate profile exists for the child process,
the default selection is Px. If one does not
exist, the profile defaults to ix. Child processes
with separate profiles have aa-autodep run on them
and are loaded into AppArmor, if it is running.
When aa-logprof exits, profiles are updated with the
changes. If AppArmor is active, the updated profiles are reloaded and,
if any processes that generated security events are still running in
the null-XXXX profiles (unique profiles temporarily created in complain
mode), those processes are set to run under their proper profiles.
To run aa-logprof, enter
aa-logprof into a terminal window while logged in as
root. The following options can be used for
aa-logprof:
aa-logprof -d /path/to/profile/directory/
Specifies the full path to the location of the profiles if the
profiles are not located in the standard directory,
/etc/apparmor.d/.
aa-logprof -f /path/to/logfile/
Specifies the full path to the log file if it is not located in the
default location,
/var/log/audit/audit.log.
aa-logprof -m "string marker in logfile"
Marks the starting point for aa-logprof to look
in the system log. aa-logprof ignores all events
in the system log before the specified mark. If the mark contains
spaces, it must be surrounded by quotes to work correctly. For
example:
root # aa-logprof -m "17:04:21"
or
root # aa-logprof -m e2ff78636296f16d0b5301209a04430d
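A sketch of the marker workflow: generate an arbitrary unique string, place it in the log, and scan only events after it. The privileged commands are shown as comments because they require root and a running AppArmor system:

```shell
# Generate a random 32-character hexadecimal marker (any unique string works).
marker=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "GenProf: $marker"
# logger "GenProf: $marker"        # write the marker to the system log
# aa-logprof -m "$marker"          # consider only events after the marker
```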
aa-logprof scans the log, asking you how to handle
each logged event. Each question presents a numbered list of AppArmor
rules that can be added by pressing the number of the item on the list.
By default, aa-logprof looks for profiles in
/etc/apparmor.d/. Often running
aa-logprof as root is enough to update the
profile. However, there might be times when you need to search archived
log files, such as if the program exercise period exceeds the log
rotation window (when the log file is archived and a new log file is
started). If this is the case, you can enter zcat -f `ls -1tr /path/to/logfile*` | aa-logprof -f -.
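The zcat pipeline can be demonstrated on scratch files; the final aa-logprof stage is commented out since it requires root, and the file names are illustrative:

```shell
dir=$(mktemp -d)
printf 'older event\n' > "$dir/audit.log.1"
gzip "$dir/audit.log.1"            # rotated logs are usually compressed
sleep 1                            # ensure distinct modification times
printf 'newer event\n' > "$dir/audit.log"
# ls -1tr sorts oldest first; zcat -f decompresses .gz files and passes
# plain files through unchanged.
zcat -f $(ls -1tr "$dir"/audit.log*)
# zcat -f $(ls -1tr "$dir"/audit.log*) | aa-logprof -f -
```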
The following is an example of how aa-logprof
addresses httpd2-prefork accessing the file
/etc/group. [] indicates the
default option.
In this example, the access to /etc/group is part of
httpd2-prefork accessing name services. The appropriate response is
1, which includes a predefined set of AppArmor rules.
Selecting 1 to #include the name
service package resolves all of the future questions pertaining to DNS
lookups and makes the profile less brittle in that any changes to
DNS configuration and the associated name service profile package can
be made once, rather than needing to revise many profiles.
Profile:  /usr/sbin/httpd2-prefork
Path:     /etc/group
New Mode: r
 [1 - #include <abstractions/nameservice>]
  2 - /etc/group
[(A)llow] / (D)eny / (N)ew / (G)lob / Glob w/(E)xt / Abo(r)t / (F)inish
Select one of the following responses:
Triggers the default action, which is, in this example, allowing access to the specified directory path entry.
Allows access to the specified directory path entries. AppArmor suggests file permission access. For more information about this, refer to Section 21.7, “File Permission Access Modes”.
Permanently prevents the program from accessing the specified directory path entries. AppArmor then continues to the next event.
Prompts you to enter your own rule for this event, allowing you to specify whatever form of regular expression you want. If the expression entered does not actually satisfy the event that prompted the question in the first place, AppArmor asks for confirmation and lets you reenter the expression.
Select either a specific path or create a general rule using wild cards that matches a broader set of paths. To select any of the offered paths, enter the number that is printed in front of the path, then decide how to proceed with the selected item.
For more information about globbing syntax, refer to Section 21.6, “Profile Names, Flags, Paths, and Globbing”.
This modifies the original directory path while retaining the file
name extension. For example,
/etc/apache2/file.ext becomes
/etc/apache2/*.ext, adding the wild card
(asterisk) in place of the file name. This allows the program to
access all files in the suggested directory that end with the
.ext extension.
Aborts aa-logprof, losing all rule changes
entered so far and leaving all profiles unmodified.
Closes aa-logprof, saving all rule changes
entered so far and modifying all profiles.
For example, when profiling vsftpd, see this question:
Profile:  /usr/sbin/vsftpd
Path:     /y2k.jpg
New Mode: r
 [1 - /y2k.jpg]
(A)llow / [(D)eny] / (N)ew / (G)lob / Glob w/(E)xt / Abo(r)t / (F)inish
Several items of interest appear in this question. First, note that
vsftpd is asking for a path entry at the top of the tree, even though
vsftpd on openSUSE Leap
serves FTP files from /srv/ftp by default. This is
because vsftpd uses chroot and, for the portion of the code inside the
chroot jail, AppArmor sees file accesses in terms of the chroot
environment rather than the global absolute path.
The second item of interest is that you might want to grant FTP read
access to all JPEG files in the directory, so you could use Glob w/Ext
and use the suggested path of
/*.jpg. Doing so collapses all previous rules
granting access to individual .jpg files and
forestalls any future questions pertaining to access to
.jpg files.
Finally, you might want to grant more general access to FTP files. If
you select Glob in the last entry,
aa-logprof replaces the suggested path of
/y2k.jpg with /*.
Alternatively, you might want to grant even more access to the entire
directory tree, in which case you could use the
New path option and enter /**.jpg (which would grant
access to all .jpg files in the entire directory
tree) or /** (which would grant access to all
files in the directory tree).
These items deal with read accesses. Write accesses are similar, except that it is good policy to be more conservative in your use of regular expressions for write accesses. Dealing with execute accesses is more complex. Find an example in Example 24.1, “Learning Mode Exception: Controlling Access to Specific Resources”.
In the following example, the /usr/bin/mail mail
client is being profiled and aa-logprof has
discovered that /usr/bin/mail executes
/usr/bin/less as a helper application to
“page” long mail messages. Consequently, it presents this
prompt:
/usr/bin/nail -> /usr/bin/less (I)nherit / (P)rofile / (C)hild / (N)ame / (U)nconfined / (X)ix / (D)eny
The actual executable file for /usr/bin/mail
turns out to be /usr/bin/nail, which is not a
typographical error.
The program /usr/bin/less appears to be a
simple one for scrolling through text that is more than one screen
long and that is in fact what /usr/bin/mail is
using it for. However, less is actually a large
and powerful program that uses many other helper applications, such
as tar and rpm.
Run less on a tar file or an RPM file and it shows
you the inventory of these containers.
You do not want to run rpm automatically when
reading mail messages (that leads directly to a Microsoft*
Outlook–style virus attack, because RPM has the power to
install and modify system programs), so, in this case, the best choice
is to use (I)nherit. This results in the less program
executed from this context running under the profile for
/usr/bin/mail. This has two consequences:
You need to add all of the basic file accesses for
/usr/bin/less to the profile for
/usr/bin/mail.
You can avoid adding the helper applications, such as
tar and rpm, to the
/usr/bin/mail profile so that when
/usr/bin/mail runs
/usr/bin/less in this context, the less program
is far less dangerous than it would be without AppArmor protection.
Another option is to use the Cx execute modes. For more information
on execute modes, see Section 21.8, “Execute Modes”.
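As a sketch, the inherit choice would put a rule like this into the parent profile (the rules are illustrative, not the shipped profile):

```
/usr/bin/mail {
  # (I)nherit: less runs confined by this same profile ...
  /usr/bin/less ix,
  # ... so the basic file accesses less needs are added here,
  # while the helper programs less could call (tar, rpm) are not.
}
```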
In other circumstances, you might instead want to use the
(P)rofile option. This has the following effects on
aa-logprof:
The rule written into the profile uses px/Px, which forces the transition to the child's own profile.
aa-logprof constructs a profile for the child and
starts building it, in the same way that it built the parent profile,
by assigning events for the child process to the child's profile and
asking the aa-logprof user questions. The profile
will also be applied if you run the child as a stand-alone program.
If a confined program forks and executes another program,
aa-logprof sees this and asks the user which
execution mode should be used when launching the child process. The
execution modes of inherit, profile, unconfined, child, named profile,
or an option to deny the execution are presented.
If a separate profile exists for the child process, the default
selection is profile. If a profile does not exist, the default is
inherit. The inherit option, or ix, is described in
Section 21.7, “File Permission Access Modes”.
The profile option indicates that the child program should run in its
own profile. A secondary question asks whether to sanitize the
environment that the child program inherits from the parent. If you
choose to sanitize the environment, this places the execution modifier
Px in your AppArmor profile. If you select not to
sanitize, px is placed in the profile and no
environment sanitizing occurs. The default for the execution mode is
Px if you select profile execution mode.
The unconfined execution mode is not recommended and should only be
used in cases where there is no other option to generate a profile for
a program reliably. Selecting unconfined opens a warning dialog asking
for confirmation of the choice. If you are sure and choose
Yes, a second dialog asks whether to sanitize the
environment. To use the execution mode Ux in your
profile, select Yes. To use the execution mode
ux in your profile instead, select
No. The default value selected is
Ux for unconfined execution mode.
Selecting ux or Ux is very dangerous and provides
no enforcement of policy (from a security perspective) of the
resulting execution behavior of the child program.
The aa-unconfined command examines open network
ports on your system, compares that to the set of profiles loaded on
your system, and reports network services that do not have AppArmor
profiles. It requires root privileges and that it not be
confined by an AppArmor profile.
aa-unconfined must be run as root to
retrieve the process executable link from the
/proc file system. This program is susceptible to
the following race conditions:
An unlinked executable is mishandled
A process that dies between netstat(8) and further
checks is mishandled
This program lists processes using TCP and UDP only. In short, this program is unsuitable for forensics use and is provided only as an aid to profiling all network-accessible processes in the lab.
aa-notify is a handy utility that displays AppArmor
notifications in your desktop environment. This is very convenient if
you do not want to inspect the AppArmor log file, but rather let the
desktop inform you about events that violate the policy. To enable
AppArmor desktop notifications, run aa-notify:
tux > sudo aa-notify -p -u USERNAME --display DISPLAY_NUMBER
where USERNAME is the user name you are logged
in under, and DISPLAY_NUMBER is the
X Window display number you are currently using, such as
:0. The process is run in the background, and shows
a notification each time a deny event happens.
The active X Window display number is saved in the
$DISPLAY variable, so you can use
--display $DISPLAY to avoid finding out the current
display number.
aa-notify Message in GNOME #
With the -s DAYS option,
you can also configure aa-notify to display a
summary of notifications for the specified number of past days. For
more information on aa-notify, see its man page
man 8 aa-notify.
A syntax highlighting file for the vim text editor highlights various features of an AppArmor profile with colors. Using vim and the AppArmor syntax mode for vim, you can see the semantic implications of your profiles with color highlighting. Use vim to view and edit your profile by typing vim at a terminal window.
To enable the syntax coloring when you edit an AppArmor profile in vim,
use the commands :syntax on then :set
syntax=apparmor. To make sure vim recognizes the edited file
type correctly as an AppArmor profile, add
# vim:ft=apparmor
at the end of the profile.
vim comes with AppArmor highlighting automatically
enabled for files in /etc/apparmor.d/.
When you enable this feature, vim colors the lines of the profile for you:
Comments
Ordinary read access lines
Capability statements and complain flags
Lines that grant write access
Lines that grant execute permission (either ix or px)
Lines that grant unconfined access (ux)
Syntax errors that will not load properly into the AppArmor modules
Use the apparmor.vim and
vim man pages and the :help
syntax from within the vim editor for further vim help about
syntax highlighting. The AppArmor syntax is stored in
/usr/share/vim/current/syntax/apparmor.vim.
The following list contains the most important files and directories used by the AppArmor framework. If you intend to manage and troubleshoot your profiles manually, make sure that you know about these files and directories:
/sys/kernel/security/apparmor/profiles
Virtualized file representing the currently loaded set of profiles.
/etc/apparmor/
Location of AppArmor configuration files.
/etc/apparmor/profiles/extras/
A local repository of profiles shipped with AppArmor, but not enabled by default.
/etc/apparmor.d/
Location of profiles, named with the convention of replacing the
/ in paths with . (not for the
root /) so profiles are easier to manage. For
example, the profile for the program
/usr/sbin/smbd is named
usr.sbin.smbd.
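The naming convention can be applied mechanically. A small shell sketch (the helper below is illustrative, not an AppArmor tool):

```shell
# Derive the conventional profile file name from a binary path:
# drop the leading "/" and replace the remaining "/" with ".".
path="/usr/sbin/smbd"
profile=$(printf '%s' "${path#/}" | tr '/' '.')
echo "$profile"   # usr.sbin.smbd
```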
/etc/apparmor.d/abstractions/
Location of abstractions.
/etc/apparmor.d/program-chunks/
Location of program chunks.
/proc/*/attr/current
Check this file to review the confinement status of a process and the
profile that is used to confine the process. The ps
auxZ command retrieves this information
automatically.
An AppArmor® profile represents the security policy for an individual program instance or process. It applies to an executable program, but if a portion of the program needs different access permissions than other portions, the program can “change hats” to use a different security context, distinctive from the access of the main program. This is known as a hat or subprofile.
ChangeHat enables programs to change to or from a hat within an AppArmor profile. It enables you to define security at a finer level than the process. This feature requires that each application be made “ChangeHat-aware”, meaning that it is modified to make a request to the AppArmor module to switch security domains at specific times during the application execution. One example of a ChangeHat-aware application is the Apache Web server.
A profile can have an arbitrary number of subprofiles, but there are only
two levels: a subprofile cannot have further child profiles. A subprofile
is written as a separate profile. Its name consists of the name of the
containing profile followed by the subprofile name, separated by a
^.
Subprofiles are either stored in the same file as the parent profile, or in a separate file. The latter case is recommended on sites with many hats—it allows the policy caching to handle changes at the per hat level. If all the hats are in the same file as the parent profile, then the parent profile and all hats must be recompiled.
An external subprofile that is going to be used as a hat must begin with
the word hat or the ^ character.
The following two subprofiles cannot be used as a hat:
/foo//bar { }
or
profile /foo//bar { }
While the following two are treated as hats:
^/foo//bar { }
or
hat /foo//bar { } # this syntax is not highlighted in vim
Note that the security of hats is considerably weaker than that of full profiles. Using certain types of bugs in a program, an attacker may be able to escape from a hat into the containing profile. This is because the security of hats is determined by a secret key handled by the containing process, and the code running in the hat must not have access to the key. Thus, change_hat is most useful with application servers, where a language interpreter (such as PERL, PHP, or Java) is isolating pieces of code such that they do not have direct access to the memory of the containing process.
The rest of this chapter describes using change_hat with
Apache, to contain Web server components run using mod_perl and mod_php.
Similar approaches can be used with any application server by providing an
application module similar to the mod_apparmor described next in
Section 25.1.2, “Location and Directory Directives”.
For more information, see the change_hat man page.
mod_apparmor #
AppArmor provides a mod_apparmor module (package apache2-mod-apparmor) for the Apache
program. This module
makes the Apache Web server ChangeHat aware. Install it along with Apache.
When Apache is ChangeHat-aware, it checks for the following customized AppArmor security profiles in the order given for every URI request that it receives.
URI-specific hat. For example,
^www_app_name/templates/classic/images/bar_left.gif
DEFAULT_URI
HANDLING_UNTRUSTED_INPUT
If you install
apache2-mod-apparmor, make
sure the module is enabled, and then restart Apache by executing the
following command:
tux > sudo a2enmod apparmor && sudo systemctl reload apache2
Apache is configured by placing directives in plain text configuration
files. The main configuration file is usually
/etc/apache2/httpd.conf. When you compile Apache,
you can indicate the location of this file. Directives can be placed in
any of these configuration files to alter the way Apache behaves. When
you make changes to the main configuration files, you need to reload
Apache with sudo systemctl reload apache2, so
the changes are recognized.
<VirtualHost> and </VirtualHost> directives are used to enclose a group of directives that will apply only to a particular virtual host. For more information on Apache virtual host directives, refer to http://httpd.apache.org/docs/2.4/en/mod/core.html#virtualhost.
The ChangeHat-specific configuration keyword is
AADefaultHatName. It is used similarly to
AAHatName, for example, AADefaultHatName
My_Funky_Default_Hat.
It allows you to specify a default hat to be used for virtual hosts and
other Apache server directives, so that you can have different defaults
for different virtual hosts. This can be overridden by the
AAHatName directive and is checked for only if there
is not a matching AAHatName or hat named by the URI.
If the AADefaultHatName hat does not exist, it falls
back to the DEFAULT_URI hat if that exists.
If none of those are matched, it goes back to the “parent” Apache hat.
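A sketch of how this might look in a virtual host block (the host name and hat name are illustrative):

```
<VirtualHost *:80>
    ServerName www.example.org
    # Requests not matched by an AAHatName directive or a
    # URI-specific hat fall back to this hat:
    AADefaultHatName www.example.org
</VirtualHost>
```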
Location and directory directives specify hat names in the program configuration file so that Apache applies the named hat when serving the matching location or directory. For Apache, you can find documentation about the location and directory directives at http://httpd.apache.org/docs/2.4/en/sections.html.
The location directive example below specifies that, for a given
location, mod_apparmor should use a specific hat:
<Location /foo/>
  AAHatName MY_HAT_NAME
</Location>
This tries to use MY_HAT_NAME for any URI beginning
with /foo/ (/foo/,
/foo/bar,
/foo/cgi/path/blah_blah/blah, etc.).
The directory directive works similarly to the location directive, except it refers to a path in the file system as in the following example:
<Directory "/srv/www/www.example.org/docs">  # Note lack of trailing slash
  AAHatName example.org
</Directory>
In the previous section you learned about mod_apparmor
and the way it helps you to secure a specific Web application. This
section walks you through a real-life example of creating a hat for a Web
application, and using AppArmor's change_hat feature to secure it.
Note that this chapter focuses on AppArmor's command line tools, as
YaST's AppArmor module has limited functionality.
For illustration purposes, let us choose the Web application called Adminer (http://www.adminer.org/en/). It is a full-featured SQL database management tool written in PHP, yet consisting of a single PHP file. For Adminer to work, you need to set up an Apache Web server, PHP and its Apache module, and one of the database drivers available for PHP—MariaDB in this example. You can install the required packages with
zypper in apache2 apache2-mod_apparmor apache2-mod_php5 php5 php5-mysql
To set up the Web environment for running Adminer, follow these steps:
Make sure apparmor and php5
modules are enabled for Apache. To enable the modules in any case, use:
tux > sudo a2enmod apparmor php5
and then restart Apache with
tux > sudo systemctl restart apache2
Make sure MariaDB is running. If unsure, restart it with
tux > sudo systemctl restart mysql
Download Adminer from http://www.adminer.org, copy
it to /srv/www/htdocs/adminer/, and rename it to
adminer.php, so that its full path is
/srv/www/htdocs/adminer/adminer.php.
Test Adminer in your Web browser by entering
http://localhost/adminer/adminer.php in its URI
address field. If you installed Adminer to a remote server, replace
localhost with the real host name of the server.
If you encounter problems viewing the Adminer login page,
try to look for help in the Apache error log
/var/log/apache2/error.log. Another
reason you cannot access the Web page may be
that your Apache is already under AppArmor control and its AppArmor
profile is too tight to permit viewing Adminer. Check it
with aa-status, and if needed, set Apache
temporarily in complain mode with
tux > sudo aa-complain usr.sbin.httpd2-prefork
After the Web environment for Adminer is ready, you need to configure
Apache's mod_apparmor, so that AppArmor can detect accesses to Adminer and
change to the specific “hat”.
mod_apparmor #
Apache has several configuration files under
/etc/apache2/ and
/etc/apache2/conf.d/. Choose your preferred one
and open it in a text editor. In this example, the
vim editor is used to create a new configuration
file /etc/apache2/conf.d/apparmor.conf.
tux > sudo vim /etc/apache2/conf.d/apparmor.conf
Copy the following snippet into the edited file.
<Directory /srv/www/htdocs/adminer>
  AAHatName adminer
</Directory>
It tells Apache to let AppArmor know about a change_hat event when the
Web user accesses the directory /adminer (and any
file/directory inside) in Apache's document root. Remember, we placed
the adminer.php application there.
Save the file, close the editor, and restart Apache with
tux > sudo systemctl restart apache2
Apache now knows about our Adminer and about changing a “hat” for
it. It is time to create the related hat for Adminer in the AppArmor
configuration. If you do not have an AppArmor profile yet, create one
before proceeding. Remember that if your Apache's main binary is
/usr/sbin/httpd2-prefork, then the related profile
is named /etc/apparmor.d/usr.sbin.httpd2-prefork.
Open (or create one if it does not exist) the file
/etc/apparmor.d/usr.sbin.httpd2-prefork in a text
editor. Its contents should be similar to the following:
#include <tunables/global>
/usr/sbin/httpd2-prefork {
#include <abstractions/apache2-common>
#include <abstractions/base>
#include <abstractions/php5>
capability kill,
capability setgid,
capability setuid,
/etc/apache2/** r,
/run/httpd.pid rw,
/usr/lib{,32,64}/apache2*/** mr,
/var/log/apache2/** rw,
^DEFAULT_URI {
#include <abstractions/apache2-common>
/var/log/apache2/** rw,
}
^HANDLING_UNTRUSTED_INPUT {
#include <abstractions/apache2-common>
/var/log/apache2/** w,
}
}
Before the last closing curly bracket (}), insert
the following section:
^adminer flags=(complain) {
}
Note the (complain) addition after the hat
name—it tells AppArmor to leave the
adminer hat in complain mode. That is because
we need to learn the hat profile by accessing Adminer later on.
Save the file, and then restart AppArmor, then Apache.
tux > sudo systemctl reload apparmor apache2
Check if the adminer hat really is in complain
mode.
tux > sudo aa-status
apparmor module is loaded.
39 profiles are loaded.
37 profiles are in enforce mode.
[...]
   /usr/sbin/httpd2-prefork
   /usr/sbin/httpd2-prefork//DEFAULT_URI
   /usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT
[...]
2 profiles are in complain mode.
   /usr/bin/getopt
   /usr/sbin/httpd2-prefork//adminer
[...]
As we can see, the httpd2-prefork//adminer hat is loaded
in complain mode.
Our last task is to find out the right set of rules for the
adminer hat. That is why we set the
adminer hat into complain mode—the
logging facility collects useful information about the access
requirements of adminer.php as we use it via the Web
browser. aa-logprof then helps us with creating the
hat's profile.
adminer Hat #
Open Adminer in the Web browser. If you installed it locally, then the
URI is http://localhost/adminer/adminer.php.
Choose the database engine you want to use (MariaDB in our case), and log in to Adminer using the existing database user name and password. You do not need to specify the database name as you can do so after logging in. Perform any operations with Adminer you like—create a new database, create a new table for it, set user privileges, and so on.
After the short testing of Adminer's user interface, switch back to console and examine the log for collected data.
tux > sudo aa-logprof
Reading log entries from /var/log/messages.
Updating AppArmor profiles in /etc/apparmor.d.
Complain-mode changes:
Profile:  /usr/sbin/httpd2-prefork^adminer
Path:     /dev/urandom
Mode:     r
Severity: 3
  1 - #include <abstractions/apache2-common>
[...]
 [8 - /dev/urandom]
[(A)llow] / (D)eny / (G)lob / Glob w/(E)xt / (N)ew / Abo(r)t / (F)inish / (O)pts
From the aa-logprof message, it is clear that our
new adminer hat was correctly detected:
Profile: /usr/sbin/httpd2-prefork^adminer
The aa-logprof command will ask you to pick the
right rule for each discovered AppArmor event. Specify the one you want
to use, and confirm with (A)llow. For more information
on working with the aa-genprof and
aa-logprof interface, see
Section 24.7.3.8, “aa-genprof—Generating Profiles”.
aa-logprof usually offers several valid rules for
the examined event. Some are
abstractions—predefined sets of rules
affecting a specific common group of targets. Sometimes it is useful
to include such an abstraction instead of a direct URI rule:
  1 - #include <abstractions/php5>
 [2 - /var/lib/php5/sess_3jdmii9cacj1e3jnahbtopajl7p064ai242]
In the example above, it is recommended to press 1 and confirm with (A)llow to include the abstraction.
After the last change, you will be asked to save the changed profile.
The following local profiles were changed. Would you like to save them?
 [1 - /usr/sbin/httpd2-prefork]
(S)ave Changes / [(V)iew Changes] / Abo(r)t
Hit (S)ave Changes to save the changes.
Set the profile to enforce mode with aa-enforce
tux > sudo aa-enforce usr.sbin.httpd2-prefork
and check its status with aa-status
tux > sudo aa-status
apparmor module is loaded.
39 profiles are loaded.
38 profiles are in enforce mode.
[...]
   /usr/sbin/httpd2-prefork
   /usr/sbin/httpd2-prefork//DEFAULT_URI
   /usr/sbin/httpd2-prefork//HANDLING_UNTRUSTED_INPUT
   /usr/sbin/httpd2-prefork//adminer
[...]
As you can see, the //adminer hat jumped from
complain to enforce mode.
Try to run Adminer in the Web browser, and if you encounter problems
running it, switch it to the complain mode, repeat the steps that
previously did not work well, and update the profile with
aa-logprof until you are satisfied with the
application's functionality.
The profile ^adminer is only available in the
context of a process running under the parent profile
usr.sbin.httpd2-prefork.
When you use the Edit Profile dialog (for instructions, refer to Section 23.2, “Editing Profiles”) or when you add a new profile using Manually Add Profile (for instructions, refer to Section 23.1, “Manually Adding a Profile”), you are given the option of adding hats (subprofiles) to your AppArmor profiles. Add a ChangeHat subprofile as in the following.
pam_apparmor #
An AppArmor profile applies to an executable program; if a portion of the
program needs different access permissions than other portions need, the
program can change hats via change_hat to a different role, also known as
a subprofile. The pam_apparmor PAM module allows
applications to confine authenticated users into subprofiles based on
group names, user names, or a default profile. To accomplish this,
pam_apparmor needs to be registered as a PAM
session module.
The package pam_apparmor is not installed by
default; you can install it using YaST or zypper.
Details about how to set up and configure
pam_apparmor can be found in
/usr/share/doc/packages/pam_apparmor/README after the
package has been installed. For details on PAM, refer to
Chapter 2, Authentication with PAM.
After creating profiles and immunizing your applications, openSUSE® Leap becomes more efficient and better protected as long as you perform AppArmor® profile maintenance (which involves analyzing log files, refining your profiles, backing up your set of profiles and keeping it up-to-date). You can deal with these issues before they become a problem by setting up event notification by e-mail, updating profiles from system log entries by running the aa-logprof tool, and dealing with maintenance issues.
When you receive a security event rejection, examine the access violation
and determine if that event indicated a threat or was part of normal
application behavior. Application-specific knowledge is required to make
the determination. If the rejected action is part of normal application
behavior, run aa-logprof at the command line.
If the rejected action is not part of normal application behavior, this access should be considered a possible intrusion attempt (that was prevented) and this notification should be passed to the person responsible for security within your organization.
In a production environment, you should plan on maintaining profiles for all of the deployed applications. The security policies are an integral part of your deployment. You should plan on taking steps to back up and restore security policy files, plan for software changes, and allow any needed modification of security policies that your environment dictates.
Backing up profiles might save you from having to re-profile all your programs after a disk crash. Also, if profiles are changed, you can easily restore previous settings by using the backed up files.
Back up profiles by copying the profile files to a specified directory.
You should first archive the files into one file. To do this, open a
terminal window and enter the following as root:
tux > sudo tar zclpf profiles.tgz /etc/apparmor.d
The simplest method to ensure that your security policy files are
regularly backed up is to include the directory
/etc/apparmor.d in the list of directories that
your backup system archives.
You can also use scp or a file manager like
Nautilus to store the files on some kind of storage media, the
network, or another computer.
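For example, a backup and restore round trip can be sketched like this (temporary directories stand in for /etc/apparmor.d and the restore target; paths are illustrative):

```shell
# Archive a profile directory and restore it elsewhere.
src=$(mktemp -d)
dst=$(mktemp -d)
echo "profile-data" > "$src/usr.sbin.smbd"
tar zcpf "$src.tgz" -C "$src" .   # back up (compare: tar zclpf profiles.tgz /etc/apparmor.d)
tar zxpf "$src.tgz" -C "$dst"     # restore into the target directory
cat "$dst/usr.sbin.smbd"          # prints: profile-data
```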
Maintenance of security profiles includes changing them if you decide that your system requires more or less security for its applications. To change your profiles in AppArmor, refer to Section 23.2, “Editing Profiles”.
When you add a new application version or patch to your system, you should always update the profile to fit your needs. You have several options, depending on your company's software deployment strategy. You can deploy your patches and upgrades into a test or production environment. The following explains how to do this with each method.
If you intend to deploy a patch or upgrade in a test environment, the
best method for updating your profiles is to run
aa-logprof in a terminal as root. For
detailed instructions, refer to
Section 24.7.3.9, “aa-logprof—Scanning the System Log”.
If you intend to deploy a patch or upgrade directly into a production
environment, the best method for updating your profiles is to monitor
the system frequently to determine if any new rejections should be added
to the profile and update as needed using aa-logprof.
For detailed instructions, refer to
Section 24.7.3.9, “aa-logprof—Scanning the System Log”.
This chapter outlines maintenance-related tasks. Learn how to update AppArmor® and get a list of available man pages providing basic help for using the command line tools provided by AppArmor. Use the troubleshooting section to learn about some common problems encountered with AppArmor and their solutions. Report defects or enhancement requests for AppArmor following the instructions in this chapter.
Updates for AppArmor packages are provided in the same way as any other update for openSUSE Leap. Retrieve and apply them exactly as you would any other package that ships as part of openSUSE Leap.
There are man pages available for your use. In a terminal, enter
man apparmor to open the AppArmor man page. Man pages
are distributed in sections numbered 1 through 8. Each section is
specific to a category of documentation:
| Section | Category |
|---|---|
| 1 | User commands |
| 2 | System calls |
| 3 | Library functions |
| 4 | Device driver information |
| 5 | Configuration file formats |
| 6 | Games |
| 7 | High level concepts |
| 8 | Administrator commands |
The section numbers are used to distinguish man pages from each other.
For example, exit(2) describes the exit system
call, while exit(3) describes the exit C library
function.
The AppArmor man pages are:
aa-audit(8)
aa-autodep(8)
aa-complain(8)
aa-decode(8)
aa-disable(8)
aa-easyprof(8)
aa-enforce(8)
aa-exec(8)
aa-genprof(8)
aa-logprof(8)
aa-notify(8)
aa-status(8)
aa-unconfined(8)
aa_change_hat(8)
logprof.conf(5)
apparmor.d(5)
apparmor.vim(5)
apparmor(7)
apparmor_parser(8)
apparmor_status(8)
Find more information about the AppArmor product at:
http://wiki.apparmor.net. Find the product
documentation for AppArmor in the installed system at
/usr/share/doc/manual.
There is a mailing list for AppArmor that users can post to or join to communicate with developers. See https://lists.ubuntu.com/mailman/listinfo/apparmor for details.
This section lists the most common problems and error messages that may occur using AppArmor.
If you notice odd application behavior or any other type of application
problem, you should first check the reject messages in the log files to
see if AppArmor is too closely constricting your application. If you
detect reject messages that indicate that your application or service is
too closely restricted by AppArmor, update your profile to properly
handle your use case of the application. Do this with
aa-logprof
(Section 24.7.3.9, “aa-logprof—Scanning the System Log”).
If you decide to run your application or service without AppArmor
protection, remove the application's profile from
/etc/apparmor.d or move it to another location.
If you have been using previous versions of AppArmor and have updated your system (but kept your old set of profiles) you might notice some applications which seemed to work perfectly before you updated behaving strangely, or not working.
This version of AppArmor introduces a set of new features to the profile syntax and the AppArmor tools that might cause trouble with older versions of the AppArmor profiles. Those features are:
File Locking
Network Access Control
The SYS_PTRACE Capability
Directory Path Access
The current version of AppArmor mediates file locking and introduces a
new permission mode (k) for this. Applications
requesting file locking permission might misbehave or fail altogether if
confined by older profiles which do not explicitly contain permissions
to lock files. If you suspect this being the case, check the log file
under /var/log/audit/audit.log for entries like the
following:
type=AVC msg=audit(1389862802.727:13939): apparmor="DENIED" \ operation="file_lock" parent=2692 profile="/usr/bin/opera" \ name="/home/tux/.qt/.qtrc.lock" pid=28730 comm="httpd2-prefork" \ requested_mask="::k" denied_mask="::k" fsuid=30 ouid=0
Update the profile using the aa-logprof command as
outlined below.
The new network access control syntax based on the network family and
type specification, described in
Section 21.5, “Network Access Control”, might cause application
misbehavior or even stop applications from working. If you notice a
network-related application behaving strangely, check the log file under
/var/log/audit/audit.log for entries like the
following:
type=AVC msg=audit(1389864332.233:13947): apparmor="DENIED" \ operation="socket_create" family="inet" parent=29985 profile="/bin/ping" \ sock_type="raw" pid=30251 comm="ping"
This log entry means that our example application,
/bin/ping in this case, failed to get AppArmor's
permission to open a network connection. This permission needs to be
explicitly stated to make sure that an application has network access.
To update the profile to the new syntax, use the
aa-logprof command as outlined below.
The current kernel requires the SYS_PTRACE
capability, if a process tries to access files in
/proc/PID/fd/*. New
profiles need an entry for the file and the capability, where old
profiles only needed the file entry. For example:
/proc/*/fd/** rw,
in the old syntax would translate to the following rules in the new syntax:
capability SYS_PTRACE,
/proc/*/fd/** rw,
To update the profile to the new syntax, use the YaST Update
Profile Wizard or the aa-logprof command as outlined
below.
With this version of AppArmor, a few changes have been made to the profile rule syntax to better distinguish directory from file access. Therefore, some rules matching both file and directory paths in the previous version might now match a file path only. This could lead to AppArmor not being able to access a crucial directory, and thus trigger misbehavior of your application and various log messages. The following examples highlight the most important changes to the path syntax.
Using the old syntax, the following rule would allow access to files and
directories in /proc/net. It would allow directory
access only to read the entries in the directory, but not give access to
files or directories under the directory, for example
/proc/net/dir/foo would be matched by the asterisk
(*), but as foo is a file or directory under
dir, it cannot be accessed.
/proc/net/* r,
To get the same behavior using the new syntax, you need two rules
instead of one. The first allows access to the file under
/proc/net and the second allows access to
directories under /proc/net. Directory access can
only be used for listing the contents, not actually accessing files or
directories underneath the directory.
/proc/net/* r,
/proc/net/*/ r,
The following rule works similarly both under the old and the new
syntax, and allows access to both files and directories under
/proc/net (but does not allow a directory listing
of /proc/net/ itself):
/proc/net/** r,
To distinguish file access from directory access using the above
expression in the new syntax, use the following two rules. The first one
only allows to recursively access directories under
/proc/net while the second one explicitly allows
for recursive file access only.
/proc/net/**/ r,
/proc/net/**[^/] r,
The following rule works similarly both under the old and the new syntax
and allows access to both files and directories beginning with
foo under /proc/net:
/proc/net/foo** r,
To distinguish file access from directory access in the new syntax and
use the ** globbing pattern, use the following two
rules. The first one would have matched both files and directories in
the old syntax, but only matches files in the new syntax because of the
missing trailing slash. The second rule matched neither file nor
directory in the old syntax, but matches directories only in the new
syntax:
/proc/net/**foo r,
/proc/net/**foo/ r,
The following rules illustrate how the use of the ?
globbing pattern has changed. In the old syntax, the first rule would
have matched both files and directories (four characters, the last
character could be anything but a slash). In the new syntax, it matches
only files (the trailing slash is missing). The second rule would match
nothing in the old profile syntax, but matches directories only in the
new syntax. The last rule explicitly matches a file called
bar under /proc/net/foo?.
Using the old syntax, this rule would have applied to both files and
directories:
/proc/net/foo? r,
/proc/net/foo?/ r,
/proc/net/foo?/bar r,
To find and resolve issues related to syntax changes, take some time after the update to check the profiles you want to keep and proceed as follows for each application you kept the profile for:
Put the application's profile into complain mode:
tux > sudo aa-complain /path/to/application
Log entries are made for any actions violating the current profile, but the profile is not enforced and the application's behavior is not restricted.
Run the application covering all the tasks you need this application to be able to perform.
Update the profile according to the log entries made while running the application:
tux > sudo aa-logprof /path/to/application
Put the resulting profile back into enforce mode:
tux > sudo aa-enforce /path/to/application
After installing additional Apache modules (like
apache2-mod_apparmor) or making configuration changes
to Apache, profile Apache again to find out if additional rules need to
be added to the profile. If you do not profile Apache again, it could be
unable to start properly or be unable to serve Web pages.
Run aa-disable
PROGRAMNAME to disable the
profile for PROGRAMNAME. This command creates
a symbolic link to the profile in
/etc/apparmor.d/disable/. To reactivate
the profile, delete the link, and run systemctl reload
apparmor.
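The disable and reactivate cycle described above can be sketched as follows. The profile and program name (usr.sbin.nginx, /usr/sbin/nginx) are hypothetical examples:

```shell
# Disable the profile for a program; this creates a symbolic link
# to the profile in /etc/apparmor.d/disable/
sudo aa-disable /usr/sbin/nginx        # hypothetical program name

# Reactivate the profile later: delete the link and reload AppArmor
sudo rm /etc/apparmor.d/disable/usr.sbin.nginx
sudo systemctl reload apparmor
```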
Managing profiles with AppArmor requires you to have access to the log of
the system on which the application is running. So you do not need to
run the application on your profile build host as long as you have
access to the machine that runs the application. You can run the
application on one system, transfer the logs
(/var/log/audit/audit.log or, if
audit is not installed, journalctl | grep
-i apparmor > path_to_logfile) to your profile build host
and run aa-logprof -f
PATH_TO_LOGFILE.
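The remote workflow described above might look like this; the host name buildhost and the log path are illustrative:

```shell
# On the machine running the application (audit package not installed):
# collect AppArmor messages from the journal into a log file
journalctl | grep -i apparmor > /tmp/apparmor.log

# Transfer the log to the profile build host (host name is illustrative)
scp /tmp/apparmor.log buildhost:/tmp/apparmor.log

# On the profile build host: update the profile from the transferred log
sudo aa-logprof -f /tmp/apparmor.log
```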
Manually editing AppArmor profiles can introduce syntax errors. If you attempt to start or restart AppArmor with syntax errors in your profiles, error messages are shown. This example shows the full output of a parser error.
localhost:~ # rcapparmor start
Loading AppArmor profiles
AppArmor parser error in /etc/apparmor.d/usr.sbin.squid at line 410:
  syntax error, unexpected TOK_ID, expecting TOK_MODE
Profile /etc/apparmor.d/usr.sbin.squid failed to load
Using the AppArmor YaST tools, a graphical error message indicates which profile contained the error and requests you to fix it.
To fix a syntax error, log in to a terminal window as root,
open the profile, and correct the syntax. Reload the profile set with
systemctl reload apparmor.
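After correcting the profile, you can let the parser check it before reloading the whole profile set. A sketch, assuming the -Q (skip kernel load) option of apparmor_parser is available in your version:

```shell
# Compile the corrected profile without loading it into the kernel,
# so any remaining syntax errors are reported immediately
sudo apparmor_parser -Q /etc/apparmor.d/usr.sbin.squid

# If the parser reports no errors, reload the profile set
sudo systemctl reload apparmor
```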
vi
The editor vi on openSUSE Leap supports syntax
highlighting for AppArmor profiles. Lines containing syntax errors will
be displayed with a red background.
The developers of AppArmor are eager to deliver products of the highest quality. Your feedback and your bug reports help us keep the quality high. Whenever you encounter a bug in AppArmor, file a bug report against this product:
Use your Web browser to go to http://bugzilla.opensuse.org/ and click .
Enter the account data of your SUSE account and click . If you do not have a SUSE account, click and provide the required data.
If your problem has already been reported, check this bug report and add extra information to it, if necessary.
If your problem has not been reported yet, select from the top navigation bar and proceed to the page.
Select the product against which to file the bug. In your case, this would be your product's release. Click .
Select the product version, component (AppArmor in this case), hardware platform, and severity.
Enter a brief headline describing your problem and add a more elaborate description including log files. You may create attachments to your bug report for screenshots, log files, or test cases.
Click after you have entered all the details to send your report to the developers.
See profile foundation classes below.
Apache is a freely-available Unix-based Web server. It is currently the most commonly used Web server on the Internet. Find more information about Apache at the Apache Web site at http://www.apache.org.
AppArmor confines applications and limits the actions they are permitted to take. It uses privilege confinement to prevent attackers from using malicious programs on the protected server and even using trusted applications in unintended ways.
Pattern in system or network activity that alerts of a possible virus or hacker attack. Intrusion detection systems might use attack signatures to distinguish between legitimate and potentially malicious activity.
By not relying on attack signatures, AppArmor provides "proactive" instead of "reactive" defense from attacks. This is better because there is no window of vulnerability where the attack signature must be defined for AppArmor as it does for products using attack signatures.
Graphical user interface. Refers to a software front-end meant to provide an attractive and easy-to-use interface between a computer user and application. Its elements include windows, icons, buttons, cursors, and scrollbars.
File name substitution. Instead of specifying explicit file name paths,
you can use helper characters * (substitutes any
number of characters except special ones such as /
or ?) and ? (substitutes exactly
one character) to address multiple files/directories at once.
** is a special substitution that matches any file
or directory below the current directory.
Host intrusion prevention. Works with the operating system kernel to block abnormal application behavior in the expectation that the abnormal behavior represents an unknown attack. Blocks malicious packets on the host at the network level before they can “hurt” the application they target.
A means of restricting access to objects that is based on fixed security attributes assigned to users, files, and other objects. The controls are mandatory in the sense that they cannot be modified by users or their programs.
AppArmor profile completely defines what system resources an individual application can access, and with what privileges.
Profile building blocks needed for common application activities, such as DNS lookup and user authentication.
The RPM Package Manager. An open packaging system available for anyone to use. It works on Red Hat Linux, openSUSE Leap, and other Linux and Unix systems. It is capable of installing, uninstalling, verifying, querying, and updating computer software packages. See http://www.rpm.org/ for more information.
Secure Shell. A service that allows you to access your server from a remote computer and issue text commands through a secure connection.
AppArmor provides streamlined access control for network services by specifying which files each program is allowed to read, write, and execute. This ensures that each program does what it is supposed to do and nothing else.
Universal resource identifier. The generic term for all types of names and addresses that refer to objects on the World Wide Web. A URL is one kind of URI.
Uniform Resource Locator. The global address of documents and other resources on the Web.
The first part of the address indicates what protocol to use and the second part specifies the IP address or the domain name where the resource is located.
For example, when you visit http://www.opensuse.org, you are
using the HTTP protocol, as the beginning of the URL indicates.
An aspect of a system or network that leaves it open to attack. Characteristics of computer systems that allow an individual to keep it from correctly operating or that allows unauthorized users to take control of the system. Design, administrative, or implementation weaknesses or flaws in hardware, firmware, or software. If exploited, a vulnerability could lead to an unacceptable impact in the form of unauthorized access to information or the disruption of critical processing.
In this chapter, you will learn how to set up and manage SELinux on openSUSE Leap. The following topics are covered:
Why Use SELinux?
Understanding SELinux
Setting Up SELinux
Managing SELinux
SELinux was developed as an additional Linux security solution that uses the security framework in the Linux kernel. The purpose was to allow for a more granular security policy that goes beyond what is offered by the default existing permissions of Read, Write, and Execute, and beyond assigning permissions to the different capabilities that are available on Linux. SELinux does this by trapping all system calls that reach the kernel, and denying them by default. This means that on a system that has SELinux enabled and nothing else configured, nothing will work. To allow your system to do anything, as an administrator you will need to write rules and put them in a policy.
An example explains why a solution such as SELinux (or its counterpart AppArmor) is needed:
“One morning, I found out that my server was hacked. The server was
running a fully patched SLES installation. A firewall was configured on
it and no unnecessary services were offered by this server. Further
analysis revealed that the hacker had come in through a vulnerable PHP script
that was a part of one of the Apache virtual hosts that were running on
this server. The intruder had managed to get access to a shell, using the
wwwrun account that was used by
the Apache Web server. As this
wwwrun user, the intruder had
created several scripts in the /var/tmp and the
/tmp directories, which were a part of a botnet that
was launching a Distributed Denial of Service attack against several
servers.”
The interesting thing about this hack is that it occurred on a server where nothing was really wrong. All permissions were set correctly, but the intruder still managed to get into the system. What becomes clearly evident from this example is that in some cases additional security is needed: security that goes beyond what standard permissions offer. SELinux provides this kind of security. As a less complete and less complex alternative, AppArmor can be used.
AppArmor confines specific processes in their abilities to read/write and execute files (and other things). Its view is mostly that things that happen inside a process cannot escape.
SELinux instead uses labels attached to objects (for example, files, binaries, network sockets) and uses them to determine privilege boundaries, thereby building up a level of confinement that can span more than a process or even the whole system.
SELinux was developed by the US National Security Agency (NSA), and since the beginning Red Hat has been heavily involved in its development. The first version of SELinux was offered in the era of Red Hat Enterprise Linux 4™, around the year 2006. In the beginning it offered support for essential services only, but over the years it has developed into a system that offers many rules that are collected in policies to offer protection to a broad range of services.
SELinux was developed in accordance with some certification standards like Common Criteria and FIPS 140. Because some customers specifically requested solutions that met these standards, SELinux rapidly became relatively popular.
As an alternative to SELinux, Immunix, a company that was purchased by Novell in 2005, had developed AppArmor. AppArmor was built on the same security principles as SELinux, but took a completely different approach, making it possible to restrict services to exactly what they need to do by using an easy-to-use, wizard-driven procedure. Nevertheless, AppArmor has never reached the same status as SELinux, even though there are some good arguments for securing a server with AppArmor rather than with SELinux.
Because many organizations are requesting SELinux to be in the Linux distributions they are using, SUSE is offering support for the SELinux framework in openSUSE Leap. This does not mean that the default installation of openSUSE Leap will switch from AppArmor to SELinux in the near future.
The SELinux framework is supported on openSUSE Leap. This means that openSUSE Leap offers all binaries and libraries you need to be able to use SELinux on your server. You may however miss some software that you may be familiar with from other Linux distributions.
SELinux support is at a fairly early stage in openSUSE Leap, which means that unexpected behavior may occur. To limit this risk as much as possible, it is best to use only the binaries that have been provided by default on openSUSE Leap.
Before starting the configuration of SELinux, you should know a bit about how SELinux is organized. Three components play a role:
The security framework in the Linux kernel
The SELinux libraries and binaries
The SELinux policy
The default kernel of openSUSE Leap supports SELinux and the tools that are needed to manage it. The most important part of the work of the administrator with regard to SELinux is managing the policy.
In the SELinux policy, security labels are applied to different objects on a Linux server. These objects typically are users, ports, processes and files. Using these security labels, rules are created that define what is and what is not allowed on a server. Remember, by default SELinux denies everything, and by creating the appropriate rules you can allow the access that is strictly necessary. Rules should therefore exist for all programs that you want to use on a system. Alternatively, you should configure parts of a system to run in unconfined mode, which means that specific ports, programs, users, files and directories are not protected by SELinux. This mode is useful if you only want to use SELinux to protect some essential services, while you are not specifically worried about other services. To get a really secure system, you should avoid this.
To ensure the appropriate protection of your system, you need an SELinux policy. This must be a tailor-made policy in which all files are provided with a label, and all services and users have a security label as well, to express which files and directories can be accessed by which user and process on the server. Developing such a policy is a tremendous amount of work.
The complexity of SELinux is also one of the main arguments against using it. Because a typical Linux system is so very complex, it is easy to overlook something and leave an opening that intruders can abuse to get into your system. And even if it is set up completely the way it should be, it still is very hard for an administrator to overlook all aspects with SELinux. With regard to the complexity, AppArmor takes a completely different approach and works with automated procedures that allow the administrator to set up AppArmor protection and understand exactly what is happening.
Note that a freely available SELinux policy might work on your server, but is unlikely to offer the same protection as a custom policy. SUSE also does not support third-party policies.
As mentioned, the policy is the key component in SELinux. It defines
rules that specify which objects can access which files, directories,
ports and processes on a system. To do this, a security context is
defined for all of these. On an SELinux system where the policy has been
applied to label the file system, you can use the ls
-Z command on any directory to find the security context for
the files in that directory.
Example 30.1: “Security Context Settings Using ls -Z”
shows the security context settings for the directories in the
/ directory of an openSUSE Leap system with an
SELinux-labeled file system.
# ls -Z
system_u:object_r:bin_t          bin
system_u:object_r:boot_t         boot
system_u:object_r:device_t       dev
system_u:object_r:etc_t          etc
system_u:object_r:home_root_t    home
system_u:object_r:lib_t          lib
system_u:object_r:lib_t          lib64
system_u:object_r:lost_found_t   lost+found
system_u:object_r:mnt_t          media
system_u:object_r:mnt_t          mnt
system_u:object_r:usr_t          opt
system_u:object_r:proc_t         proc
system_u:object_r:default_t      root
system_u:object_r:bin_t          sbin
system_u:object_r:security_t     selinux
system_u:object_r:var_t          srv
system_u:object_r:sysfs_t        sys
system_u:object_r:tmp_t          tmp
system_u:object_r:usr_t          usr
system_u:object_r:var_t          var
The most important line in the security context is the context type. This is the part of the security context that ends in _t. It tells SELinux which kind of access the object is allowed. In the policy, rules are specified to define which type of user or which type of role has access to which type of context. For example, this can happen by using a rule like the following:
allow user_t bin_t:file {read execute getattr};
This example rule states that the user who has the context type
user_t (this user is called
the source object) is allowed to access objects of class "file"
with the context type bin_t (the target), using the
permissions read, execute and getattr.
The standard policy that you are going to use contains a huge number of rules. To make it more manageable, policies are often split into modules. This allows the administrator to switch protection on or off for different parts of the system.
When compiling the policy for your system, you will have a choice to either work with a modular policy, or a monolithic policy, where one huge policy is used to protect everything on your system. It is strongly recommended to use a modular policy and not a monolithic policy. Modular policies are much easier to manage.
The easiest way to make sure that all SELinux components are installed is by using YaST. The procedure described below shows what to do on an installed openSUSE Leap:
Log in to your server as root
and start YaST.
Select ›
› and select the entire C category for installation.
› and make sure that ,
and are
selected. Now enter the keyword selinux and click
. You now see a list of packages.
Make sure that all the packages you have found are selected and click to install them.
After installing the SELinux packages, you need to modify the GRUB 2 boot loader. Do this from YaST, select › › . Now add the following parameters to the :
security=selinux selinux=1 enforcing=0
These options are used for the following purposes:
security=selinux
This option tells the kernel to use SELinux and not AppArmor.
selinux=1
This option switches on SELinux.
enforcing=0
This option puts SELinux in permissive mode. In this mode, SELinux is
fully functional, but does not enforce any of the security settings in
the policy. Use this mode for configuring your system. To switch on
SELinux protection, when the system is fully operational, change the
option to enforcing=1 and add
SELINUX=enforcing in
/etc/selinux/config.
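The two settings that change when you leave the configuration phase can be summarized as follows; the sed call is a sketch of editing /etc/selinux/config, which you can of course also do in a text editor:

```shell
# During setup, the kernel command line (via GRUB 2) contains:
#   security=selinux selinux=1 enforcing=0
#
# To switch on SELinux protection once the system is fully configured:
# change enforcing=0 to enforcing=1 on the kernel command line, and set
# SELINUX=enforcing in /etc/selinux/config, for example:
sudo sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
```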
After installing the SELinux packages and enabling the SELinux GRUB 2 boot options, reboot your server to activate the configuration.
The policy is an essential component of SELinux. openSUSE Leap 42.3 includes the minimum SELinux reference policy in the package selinux-policy-minimum. The examples in this chapter refer to this policy if not stated otherwise.
To install a different policy, you need to download it from https://build.opensuse.org/package/binaries/security:SELinux/selinux-policy?repository=SLE_12 and install it:
tux > sudo zypper in selinux-policy-targeted-VERSION_NUMBER.noarch.rpm
After installing the policy, you are ready to start file system labeling. Run
tux > sudo restorecon -Rp /
to start the /sbin/setfiles command to label all files
on your system. The
/etc/selinux/minimum/contexts/files/file_contexts
input file is used. The file_contexts file needs to
match your actual file system as much as possible. Otherwise, it can lead
to a completely unbootable system. If that happens, modify the records in
file_contexts with the semanage
fcontext command to match the real structure of the file system
your server is using. For example
tux > sudo semanage fcontext -a -t samba_share_t /etc/example_file
changes the file type from the default etc_t to
samba_share_t and adds the following record to the
related file_contexts.local file:
/etc/example_file unconfined_u:object_r:samba_share_t:s0
Then run
tux > sudo restorecon -v /etc/example_file
for the type change to take effect.
Before doing this, make sure to read the rest of this chapter, so you
fully understand how context type is applied to files and directories. Do
not forget to make a backup of the file_contexts
file before starting.
nobody
While using semanage, you may get a message that
complains about the home directory of
nobody. In this case, change
the login shell of user nobody
to /sbin/nologin. Then the settings of
nobody match the current
policy settings.
After another reboot SELinux should be operational. To verify this, use
the command sestatus -v. It should give you an output
similar to
Example 30.2: “Verifying that SELinux is functional”.
tux > sudo sestatus -v
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          permissive
Policy version:                 26
Policy from config file:        minimum

Process contexts:
Current context:                root:staff_r:staff_t
Init context:                   system_u:system_r:init_t
/sbin/mingetty                  system_u:system_r:sysadm_t
/usr/sbin/sshd                  system_u:system_r:sshd_t

File contexts:
Controlling term:               root:object_r:user_devpts_t
/etc/passwd                     system_u:object_r:etc_t
/etc/shadow                     system_u:object_r:shadow_t
/bin/bash                       system_u:object_r:shell_exec_t
/bin/login                      system_u:object_r:login_exec_t
/bin/sh                         system_u:object_r:bin_t -> system_u:object_r:shell_exec_t
/sbin/agetty                    system_u:object_r:getty_exec_t
/sbin/init                      system_u:object_r:init_exec_t
/sbin/mingetty                  system_u:object_r:getty_exec_t
/usr/sbin/sshd                  system_u:object_r:sshd_exec_t
/lib/libc.so.6                  system_u:object_r:lib_t -> system_u:object_r:lib_t
/lib/ld-linux.so.2              system_u:object_r:lib_t -> system_u:object_r:ld_so_t
At this point you have a completely functional SELinux system, and it is
time to configure it further. In its current state, SELinux is
operational but not in enforcing mode. This means that it does not
restrict you in doing anything; instead, it logs everything that it
would block if it were in enforcing mode. This is good, because based on
the log files you can find out what SELinux would prevent you from
doing. As a first test, put SELinux in enforcing mode and find out if
you can still use your server afterward: check that the option
enforcing=1 is set in the GRUB 2 configuration file,
and that SELINUX=enforcing is set in
/etc/selinux/config. Reboot your server and see if
it still comes up the way you expect it to. If it does, leave it like
that and continue modifying the server until everything works as
expected. However, you may not even be able to boot the server properly.
In that case, switch back to permissive mode and
start tuning your server.
Before you start tuning your server, verify the SELinux installation.
You have already used the command sestatus -v to view
the current mode, process, and file contexts. Next, run
tux > sudo semanage boolean -l
which lists all Boolean switches that are available, and at the same time verifies that you can access the policy. Example 30.3, “Getting a List of Booleans and Verifying Policy Access” shows part of the output of this command.
tux > sudo semanage boolean -l
SELinux boolean                          Description
ftp_home_dir                   -> off    ftp_home_dir
mozilla_read_content           -> off    mozilla_read_content
spamassassin_can_network       -> off    spamassassin_can_network
httpd_can_network_relay        -> off    httpd_can_network_relay
openvpn_enable_homedirs        -> off    openvpn_enable_homedirs
gpg_agent_env_file             -> off    gpg_agent_env_file
allow_httpd_awstats_script_anon_write -> off  allow_httpd_awstats_script_anon_write
httpd_can_network_connect_db   -> off    httpd_can_network_connect_db
allow_ftpd_full_access         -> off    allow_ftpd_full_access
samba_domain_controller        -> off    samba_domain_controller
httpd_enable_cgi               -> off    httpd_enable_cgi
virt_use_nfs                   -> off    virt_use_nfs
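Individual Booleans from this list can be flipped without touching the policy sources. A sketch using one of the Booleans shown above; the -P option makes the change persistent across reboots:

```shell
# Show the current value of a single Boolean
getsebool httpd_enable_cgi

# Switch it on persistently (-P writes the change to the policy store)
sudo setsebool -P httpd_enable_cgi on
```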
Another command that outputs useful information at this stage is
tux > sudo semanage fcontext -l
It shows the default file context settings as provided by the policy (see Example 30.4: “Getting File Context Information” for partial output of this command).
tux > sudo semanage fcontext -l
/var/run/usb(/.*)?                 all files     system_u:object_r:hotplug_var_run_t
/var/run/utmp                      regular file  system_u:object_r:initrc_var_run_t
/var/run/vbe.*                     regular file  system_u:object_r:hald_var_run_t
/var/run/vmnat.*                   socket        system_u:object_r:vmware_var_run_t
/var/run/vmware.*                  all files     system_u:object_r:vmware_var_run_t
/var/run/watchdog\.pid             regular file  system_u:object_r:watchdog_var_run_t
/var/run/winbindd(/.*)?            all files     system_u:object_r:winbind_var_run_t
/var/run/wnn-unix(/.*)             all files     system_u:object_r:canna_var_run_t
/var/run/wpa_supplicant(/.*)?      all files     system_u:object_r:NetworkManager_var_run_t
/var/run/wpa_supplicant-global     socket        system_u:object_r:NetworkManager_var_run_t
/var/run/xdmctl(/.*)?              all files     system_u:object_r:xdm_var_run_t
/var/run/yiff-[0-9]+\.pid          regular file  system_u:object_r:soundd_var_run_t
The base SELinux configuration is now operational, and you can start configuring it to secure your server. In SELinux, an additional set of rules is used to define exactly which process or user can access which files, directories, or ports. To do this, SELinux applies a context to every file, directory, process, and port. This context is a security label that defines how this file, directory, process, or port should be treated. These context labels are used by the SELinux policy, which defines exactly what should be done with the context labels. By default, the policy blocks all non-default access, which means that, as an administrator, you need to enable all features that are non-default on your server.
As already mentioned, files, directories, and ports can be labeled.
Within each label, different contexts are used. For your daily
administration work, the type context is the one you are most interested
in, and it is the one you will mostly work with. Many commands allow you
to use the -Z option
to list current context settings. In
Example 30.5: “The default context for directories in the root directory”
you can see what the context settings are for the directories in the
root directory.
tux > sudo ls -Z
dr-xr-xr-x. root root system_u:object_r:bin_t:s0              bin
dr-xr-xr-x. root root system_u:object_r:boot_t:s0             boot
drwxr-xr-x. root root system_u:object_r:cgroup_t:s0           cgroup
drwxr-xr-x+ root root unconfined_u:object_r:default_t:s0      data
drwxr-xr-x. root root system_u:object_r:device_t:s0           dev
drwxr-xr-x. root root system_u:object_r:etc_t:s0              etc
drwxr-xr-x. root root system_u:object_r:home_root_t:s0        home
dr-xr-xr-x. root root system_u:object_r:lib_t:s0              lib
dr-xr-xr-x. root root system_u:object_r:lib_t:s0              lib64
drwx------. root root system_u:object_r:lost_found_t:s0       lost+found
drwxr-xr-x. root root system_u:object_r:mnt_t:s0              media
drwxr-xr-x. root root system_u:object_r:autofs_t:s0           misc
drwxr-xr-x. root root system_u:object_r:mnt_t:s0              mnt
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      mnt2
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      mounts
drwxr-xr-x. root root system_u:object_r:autofs_t:s0           net
drwxr-xr-x. root root system_u:object_r:usr_t:s0              opt
dr-xr-xr-x. root root system_u:object_r:proc_t:s0             proc
drwxr-xr-x. root root unconfined_u:object_r:default_t:s0      repo
dr-xr-x---. root root system_u:object_r:admin_home_t:s0       root
dr-xr-xr-x. root root system_u:object_r:bin_t:s0              sbin
drwxr-xr-x. root root system_u:object_r:security_t:s0         selinux
drwxr-xr-x. root root system_u:object_r:var_t:s0              srv
-rw-r--r--. root root unconfined_u:object_r:swapfile_t:s0     swapfile
drwxr-xr-x. root root system_u:object_r:sysfs_t:s0            sys
drwxrwxrwt. root root system_u:object_r:tmp_t:s0              tmp
-rw-r--r--. root root unconfined_u:object_r:etc_runtime_t:s0  tmp2.tar
-rw-r--r--. root root unconfined_u:object_r:etc_runtime_t:s0  tmp.tar
drwxr-xr-x. root root system_u:object_r:usr_t:s0              usr
drwxr-xr-x. root root system_u:object_r:var_t:s0              var
In the listing above, you can see the complete context for all
directories. It consists of a user, a role, and a type. The s0 setting
indicates the security level in Multi Level Security environments. These
environments are not discussed here. In such an environment, make sure
that s0 is set. The context type defines what kind of activity is
permitted in the directory. Compare, for example, the
/root directory, which has the
admin_home_t context type, and the
/home directory, which has the
home_root_t context type. In the SELinux policy,
different kinds of access are defined for these context types.
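The structure of such a label (user, role, type, and an optional security level, separated by colons) can be taken apart in the shell. A minimal sketch, using the context of the /root directory from the listing above:

```shell
# Split an SELinux security context string into its four fields
context="system_u:object_r:admin_home_t:s0"

se_user=$(echo "$context"  | cut -d: -f1)   # SELinux user:  system_u
se_role=$(echo "$context"  | cut -d: -f2)   # role:          object_r
se_type=$(echo "$context"  | cut -d: -f3)   # context type:  admin_home_t
se_level=$(echo "$context" | cut -d: -f4)   # MLS level:     s0

echo "type: $se_type"
```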
Security labels are not only associated with files, but also with other
items, such as ports and processes. In
Example 30.6: “Showing SELinux settings for processes with ps Zaux”
for example you can see the context settings for processes on your
server.
tux > sudo ps Zaux
LABEL                              USER    PID %CPU %MEM    VSZ  RSS TTY  STAT START TIME COMMAND
system_u:system_r:init_t           root      1  0.0  0.0  10640  808 ?    Ss   05:31 0:00 init [5]
system_u:system_r:kernel_t         root      2  0.0  0.0      0    0 ?    S    05:31 0:00 [kthreadd]
system_u:system_r:kernel_t         root      3  0.0  0.0      0    0 ?    S    05:31 0:00 [ksoftirqd/0]
system_u:system_r:kernel_t         root      6  0.0  0.0      0    0 ?    S    05:31 0:00 [migration/0]
system_u:system_r:kernel_t         root      7  0.0  0.0      0    0 ?    S    05:31 0:00 [watchdog/0]
system_u:system_r:sysadm_t         root   2344  0.0  0.0  27640  852 ?    Ss   05:32 0:00 /usr/sbin/mcelog --daemon --config-file /etc/mcelog/mcelog.conf
system_u:system_r:sshd_t           root   3245  0.0  0.0  69300 1492 ?    Ss   05:32 0:00 /usr/sbin/sshd -o PidFile=/var/run/sshd.init.pid
system_u:system_r:cupsd_t          root   3265  0.0  0.0  68176 2852 ?    Ss   05:32 0:00 /usr/sbin/cupsd
system_u:system_r:nscd_t           root   3267  0.0  0.0 772876 1380 ?    Ssl  05:32 0:00 /usr/sbin/nscd
system_u:system_r:postfix_master_t root   3334  0.0  0.0  38320 2424 ?    Ss   05:32 0:00 /usr/lib/postfix/master
system_u:system_r:postfix_qmgr_t   postfix 3358 0.0  0.0  40216 2252 ?    S    05:32 0:00 qmgr -l -t fifo -u
system_u:system_r:crond_t          root   3415  0.0  0.0  14900  800 ?    Ss   05:32 0:00 /usr/sbin/cron
system_u:system_r:fsdaemon_t       root   3437  0.0  0.0  16468 1040 ?    S    05:32 0:00 /usr/sbin/smartd
system_u:system_r:sysadm_t         root   3441  0.0  0.0  66916 2152 ?    Ss   05:32 0:00 login -- root
system_u:system_r:sysadm_t         root   3442  0.0  0.0   4596  800 tty2 Ss+  05:32 0:00 /sbin/mingetty tty2
In SELinux, three different modes can be used:
Enforcing: This is the default mode. SELinux protects your server according to the rules in the policy, and SELinux logs all of its activity to the audit log.
Permissive: This mode is useful for troubleshooting. If set to permissive, SELinux does not protect your server, but it still logs everything that happens to the log files.
Disabled: In this mode, SELinux is switched off completely and no logging occurs. The file system labels, however, are not removed from the file system.
You have already read how you can set the current SELinux mode from GRUB 2 while booting using the enforcing boot parameter.
An important part of the work of an administrator is setting context types on files to ensure appropriate working of SELinux.
If a file is created within a specific directory, it inherits the context type of the parent directory by default. If, however, a file is moved from one location to another location, it retains the context type that it had in the old location.
To set the context type for files, you can use the semanage
fcontext command. With this command, you write the new context
type to the policy, but it does not change the actual context type
immediately! To apply the context types that are in the policy, you need
to run the restorecon command afterward.
The challenge when working with semanage fcontext is
to find out which context you actually need. You can use
tux > sudo semanage fcontext -l
to list all contexts in the policy, but it may be a bit hard to find out the actual context you need from that list as it is rather long (see Example 30.7: “Viewing Default File Contexts”).
tux > sudo semanage fcontext -l | less
SELinux fcontext                 type           Context
/                                directory      system_u:object_r:root_t:s0
/.*                              all files      system_u:object_r:default_t:s0
/[^/]+                           regular file   system_u:object_r:etc_runtime_t:s0
/\.autofsck                      regular file   system_u:object_r:etc_runtime_t:s0
/\.autorelabel                   regular file   system_u:object_r:etc_runtime_t:s0
/\.journal                       all files      <<None>>
/\.suspended                     regular file   system_u:object_r:etc_runtime_t:s0
/a?quota\.(user|group)           regular file   system_u:object_r:quota_db_t:s0
/afs                             directory      system_u:object_r:mnt_t:s0
/bin                             directory      system_u:object_r:bin_t:s0
/bin/.*                          all files      system_u:object_r:bin_t:s0
There are three ways to find out which context settings are available for your services:
Install the service and look at the default context settings that are used. This is the easiest and recommended option.
Consult the man page for the specific service. Some services have a
man page that ends in _selinux, which contains all
the information you need to find the correct context settings.
When you have found the right context setting, apply it using
semanage fcontext. This command takes
-t context type as its first argument, followed by
the name of the directory or file to which you want to apply the
context settings. To apply the context to everything that already
exists in the directory where you want to apply the context, you add
the regular expression (/.*)? to the name of the
directory. This means: optionally, match a slash followed by any
character. The examples section of the semanage man
page has some useful usage examples for semanage.
For more information on regular expressions, see for example the
tutorial at http://www.regular-expressions.info/.
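Before writing such a pattern to the policy, you can check how it behaves by testing it against sample paths with any POSIX regex tool. A small sketch (the paths are hypothetical):

```shell
# Test the anchored pattern '/web(/.*)?' against sample paths.
# /website does not match: the optional group must begin with a slash.
for p in /web /web/index.html /web/sub/page.html /website; do
  if printf '%s\n' "$p" | grep -Eq '^/web(/.*)?$'; then
    echo "$p matches"
  else
    echo "$p does not match"
  fi
done
```

File context patterns are matched against the whole path, which is why the sketch anchors the expression with `^` and `$`.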
Display a list of all context types that are available on your system:
tux > sudo seinfo -t
Since the command by itself outputs an overwhelming amount of
information, it should be used in combination with
grep or a similar command for filtering.
To help you apply the SELinux context properly, the following
procedure shows how to set a context using semanage
fcontext and restorecon. You will
notice that at first attempt, the Web server with a non-default
document root does not work. After changing the SELinux context,
it will:
Create the /web directory and then change to it:
tux > sudo mkdir /web && cd /web
Use a text editor to create the file
/web/index.html that contains the text welcome to
my Web site.
Open the file /etc/apache2/default-server.conf
with an editor, and change the DocumentRoot line to
DocumentRoot /web
Start the Apache Web server:
tux > sudo systemctl start apache2
Open a session to your local Web server:
tux > w3m localhost
You will receive a Connection refused message.
Press Enter, and then q to
quit w3m.
Find the current context type for the default Apache
DocumentRoot, which is
/srv/www/htdocs. It should be set to
httpd_sys_content_t:
tux > sudo ls -Z /srv/www
Set the new context in the policy and press Enter:
tux > sudo semanage fcontext -a -f "" -t httpd_sys_content_t '/web(/.*)?'
Apply the new context type:
tux > sudo restorecon /web
Show the context of the files in the directory
/web. You will see that the new context type has
been set properly to the /web directory, but not
to its contents.
tux > sudo ls -Z /web
Apply the new context recursively to the /web
directory. The type context has now been set correctly.
tux > sudo restorecon -R /web
Restart the Web server:
tux > sudo systemctl restart apache2
You should now be able to access the contents of the
/web directory.
The easiest way to change the behavior of the policy is by working with Booleans. These are on-off switches that you can use to change the settings in the policy. To find out which Booleans are available, run
tux > sudo semanage boolean -l
It will show a long list of Booleans, with a short description of
what each of these Booleans will do for you. When you have found the
Boolean you want to set, you can use setsebool -P,
followed by the name of the Boolean that you want to change. It is
important to use the -P option at all times when using
setsebool. This option writes the setting to the
policy file on disk, and this is the only way to make sure that the
Boolean is applied automatically after a reboot.
The procedure below gives an example of changing Boolean settings:
List Booleans that are related to FTP servers.
tux > sudo semanage boolean -l | grep ftp
Turn the Boolean off:
tux > sudo setsebool allow_ftpd_anon_write off
Note that it does not take much time to write the change. Then verify that the Boolean is indeed turned off:
tux > sudo semanage boolean -l | grep ftpd_anon
Reboot your server.
Check again whether the allow_ftpd_anon_write
Boolean is still turned off. As the change has not been written to the
policy, you will notice that it is on again.
Switch the Boolean and write the setting to the policy:
tux > sudo setsebool -P allow_ftpd_anon_write off
By default, SELinux uses a modular policy. This means that the
policy that implements SELinux features is not just one huge policy, but
it consists of many smaller modules. Each module covers a specific part
of the SELinux configuration. The concept of the SELinux module was
introduced to make it easier for third party vendors to make their
services compatible with SELinux. To get an overview of the SELinux
modules, you can use the semodule -l command. This
command lists all current modules in use by SELinux and their
version numbers.
As an administrator, you can switch modules on or off. This can be useful if you want to disable only a part of SELinux and not everything to run a specific service without SELinux protection. Especially in the case of openSUSE Leap, where there is not a completely supported SELinux policy yet, it can make sense to switch off all modules that you do not need so that you can focus on the services that really do need SELinux protection. To switch off an SELinux module, use
tux > sudo semodule -d MODULENAME
To switch it on again, you can use
tux > sudo semodule -e MODULENAME
As an administrator, you do not typically change the contents of the
policy files that come from the SELinux Policy RPM; you would rather use
semanage fcontext to change file contexts. If, however, you
use audit2allow to generate policies for your
server, you need to change the policy files after all.
To change the contents of any of the policy module files,
compile the changes into a new policy module file. To do this,
first install the selinux-policy-devel package.
Then, in the directory where the files created by
audit2allow are located, run:
tux > make -f /usr/share/selinux/devel/Makefile
When make has completed, you can manually load the
modules into the system, using semodule -i.
By default, if SELinux is the reason something is not working, a log
message to this effect is sent to the
/var/log/audit/audit.log file. That is, if the
auditd service is running. If you see an empty
/var/log/audit, start the auditd service using
tux > sudo systemctl start auditd
and enable it in the targets of your system, using
tux > sudo systemctl enable auditd
In
Example 30.8: “Example Lines from /etc/audit/audit.log”
you can see a partial example of the contents of
/var/log/audit/audit.log
type=DAEMON_START msg=audit(1348173810.874:6248): auditd start, ver=1.7.7 format=raw kernel=3.0.13-0.27-default auid=0 pid=4235 subj=system_u:system_r:auditd_t res=success
type=AVC msg=audit(1348173901.081:292): avc: denied { write } for pid=3426 comm="smartd" name="smartmontools" dev=sda6 ino=581743 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=dir
type=AVC msg=audit(1348173901.081:293): avc: denied { remove_name } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" dev=sda6 ino=582390 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=dir
type=AVC msg=audit(1348173901.081:294): avc: denied { unlink } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" dev=sda6 ino=582390 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:295): avc: denied { rename } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6 ino=582373 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:296): avc: denied { add_name } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state~" scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=dir
type=AVC msg=audit(1348173901.081:297): avc: denied { create } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:298): avc: denied { write open } for pid=3426 comm="smartd" name="smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6 ino=582390 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.081:299): avc: denied { getattr } for pid=3426 comm="smartd" path="/var/lib/smartmontools/smartd.WDC_WD2500BEKT_75PVMT0-WD_WXC1A21E0454.ata.state" dev=sda6 ino=582390 scontext=system_u:system_r:fsdaemon_t tcontext=system_u:object_r:var_lib_t tclass=file
type=AVC msg=audit(1348173901.309:300): avc: denied { append } for pid=1316
At first look, the lines in audit.log are a bit hard
to read. However, on closer examination they are not that hard to
understand. Every line can be broken down into sections. For example, the
sections in the last line are:
type=AVC:
every SELinux-related audit log line starts with the type
identification type=AVC
msg=audit(1348173901.309:300):
This is the time stamp, which unfortunately is written in epoch time,
the number of seconds that have passed since Jan 1, 1970. You can use
date -d on the part up to the dot in the epoch time
notation to find out when the event has happened:
tux > date -d @1348173901
Thu Sep 20 16:45:01 EDT 2012
avc: denied { append }: the specific action that was denied. In this case the system has denied the appending of data to a file. While browsing through the audit log file, you can see other system actions, such as write open, getattr and more.
for pid=1316: the process ID of the command or process that initiated the action
comm="rsyslogd": the specific command that was associated with that PID
name="acpid": the name of the subject of the action
dev=sda6 ino=582296: the block device and inode number of the file that was involved
scontext=system_u:system_r:syslogd_t: the source context, which is the context of the initiator of the action
tclass=file: a class identification of the subject
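Once you know the layout of the fields, individual values can be extracted with standard shell tools. A minimal sketch, using the rsyslogd denial from this log as sample input:

```shell
# Sample AVC record (the rsyslogd denial discussed in this section).
line='type=AVC msg=audit(1348173901.309:300): avc: denied { append } for pid=1316 comm="rsyslogd" name="acpid" dev=sda6 ino=582296 scontext=system_u:system_r:syslogd_t tcontext=system_u:object_r:apmd_log_t tclass=file'

# Pull out the epoch part of the time stamp, the command, and the class.
epoch=$(printf '%s\n' "$line" | sed 's/.*audit(\([0-9]*\)\..*/\1/')
comm=$(printf '%s\n' "$line" | sed 's/.*comm="\([^"]*\)".*/\1/')
tclass=$(printf '%s\n' "$line" | sed 's/.*tclass=\([a-z_]*\).*/\1/')
echo "epoch=$epoch comm=$comm tclass=$tclass"
```

This prints `epoch=1348173901 comm=rsyslogd tclass=file`; the epoch value is what you would pass to date -d as shown above.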
Instead of interpreting the events in audit.log yourself, there is
another approach. You can use the audit2allow command,
which helps analyze the cryptic log messages in
/var/log/audit/audit.log. An audit2allow
troubleshooting session always consists of three different commands.
First, use audit2allow -w -a to present the
audit information in a more readable way. By default,
audit2allow -w -a works on the audit.log file. To
analyze a specific message in the audit.log file, copy it to a temporary
file and analyze that file with:
tux >sudoaudit2allow -w -i FILENAME
tux > sudo audit2allow -w -i testfile
type=AVC msg=audit(1348173901.309:300): avc: denied { append } for pid=1316 comm="rsyslogd" name="acpid" dev=sda6 ino=582296 scontext=system_u:system_r:syslogd_t tcontext=system_u:object_r:apmd_log_t tclass=file
    Missing type enforcement (TE) allow rule.
    To generate a loadable module to allow this access, run "audit2allow".
To find out which specific rule has denied access, you can use
audit2allow -a to show the enforcing rules from all
events that were logged to the audit.log file, or
audit2allow -i FILENAME to
show it for messages that you have stored in a specific file:
tux > sudo audit2allow -i testfile
#============= syslogd_t ==============
allow syslogd_t apmd_log_t:file append;
To create an SELinux module with the name mymodule
that you can load to allow the access that was previously denied, run
tux > sudo audit2allow -a -R -M mymodule
If you want to do this for all events that have been logged to the
audit.log, use the -a -M command arguments. To do it
only for specific messages that are in a specific file, use -i
-M as in the example below:
tux > sudo audit2allow -i testfile -M example
******************** IMPORTANT ***********************
To make this policy package active, execute:

semodule -i example.pp
As indicated by the audit2allow command, you can now
run this module by using the semodule -i command,
followed by the name of the module that audit2allow
has created for you (example.pp in the above
example).
The Linux audit framework as shipped with this version of openSUSE Leap provides a CAPP-compliant (Controlled Access Protection Profiles) auditing system that reliably collects information about any security-relevant event. The audit records can be examined to determine whether any violation of the security policies has been committed, and by whom.
Providing an audit framework is an important requirement for a CC-CAPP/EAL (Common Criteria-Controlled Access Protection Profiles/Evaluation Assurance Level) certification. Common Criteria (CC) for Information Technology Security Information is an international standard for independent security evaluations. Common Criteria helps customers judge the security level of any IT product they intend to deploy in mission-critical setups.
Common Criteria security evaluations have two sets of evaluation requirements, functional and assurance requirements. Functional requirements describe the security attributes of the product under evaluation and are summarized under the Controlled Access Protection Profiles (CAPP). Assurance requirements are summarized under the Evaluation Assurance Level (EAL). EAL describes any activities that must take place for the evaluators to be confident that security attributes are present, effective, and implemented. Examples for activities of this kind include documenting the developers' search for security vulnerabilities, the patch process, and testing.
This guide provides a basic understanding of how audit works and how it can be set up. For more information about Common Criteria itself, refer to the Common Criteria Web site.
This chapter shows how to set up a simple audit scenario. Every step involved in configuring and enabling audit is explained in detail. After you have learned to set up audit, consider a real-world example scenario in Chapter 33, Introducing an Audit Rule Set.
The following example configuration illustrates how audit can be used to monitor your system. It highlights the most important items that need to be audited to cover the list of auditable events specified by Controlled Access Protection Profile (CAPP).
There are other resources available containing valuable information about the Linux audit framework:
Linux audit helps make your system more secure by providing you with a means to analyze what is happening on your system in great detail. It does not, however, provide additional security itself: it does not protect your system from code malfunctions or any kind of exploits. Instead, audit is useful for tracking these issues and helps you take additional security measures, like AppArmor, to prevent them.
Audit consists of several components, each contributing crucial
functionality to the overall framework. The audit kernel module intercepts
the system calls and records the relevant events. The
auditd daemon writes the audit
reports to disk. Various command line utilities take care of displaying,
querying, and archiving the audit trail.
Audit enables you to do the following:
Audit maps processes to the user ID that started them. This makes it possible for the administrator or security officer to exactly trace which user owns which process and is potentially doing malicious operations on the system.
Audit does not handle the renaming of UIDs. Therefore avoid renaming
UIDs (for example, changing tux from
uid=1001 to uid=2000) and
retire obsolete UIDs instead of reusing them. Otherwise you would need to
change auditctl data (audit rules) and would have
problems retrieving old data correctly.
Linux audit provides tools that write the audit reports to disk and translate them into human readable format.
Audit provides a utility that allows you to filter the audit reports for certain events of interest. You can filter for:
User
Group
Audit ID
Remote Host Name
Remote Host Address
System Call
System Call Arguments
File
File Operations
Success or Failure
Audit provides the means to filter the audit reports for events of interest and to tune audit to record only selected events. You can create your own set of rules and have the audit daemon record only those of interest to you.
Audit reports are owned by root and therefore only removable
by root. Unauthorized users cannot remove the audit logs.
If the kernel runs out of memory, the audit daemon's backlog is exceeded, or its rate limit is exceeded, audit can trigger a shutdown of the system to keep events from escaping audit's control. This shutdown would be an immediate halt of the system triggered by the audit kernel component without synchronizing the latest logs to disk. The default configuration is to log a warning to syslog rather than to halt the system.
If the system runs out of disk space when logging, the audit system can be configured to perform a clean shutdown. The default configuration tells the audit daemon to stop logging when it runs out of disk space.
The following figure illustrates how the various components of audit interact with each other:
Straight arrows represent the data flow between components while dashed arrows represent lines of control between components.
The audit daemon is responsible for writing the audit messages that
were generated through the audit kernel interface and triggered by
application and system activity to disk. The way the audit daemon is
started is controlled by systemd. Once started, the behavior of the
audit system is controlled by
/etc/audit/auditd.conf. For more information
about auditd and its
configuration, refer to Section 31.2, “Configuring the Audit Daemon”.
auditctl
The auditctl utility controls the audit system. It
controls the log generation parameters and kernel settings of the
audit interface and the rule sets that determine which events
are tracked. For more information about auditctl,
refer to Section 31.3, “Controlling the Audit System Using auditctl”.
The file /etc/audit/audit.rules contains a
sequence of auditctl commands that are loaded at
system boot time immediately after the audit daemon is started. For
more information about audit rules, refer to
Section 31.4, “Passing Parameters to the Audit System”.
The aureport utility allows you to create custom
reports from the audit event log. This report generation can easily be
scripted, and the output can be used by various other applications,
for example, to plot these results. For more information about
aureport, refer to
Section 31.5, “Understanding the Audit Logs and Generating Reports”.
The ausearch utility can search the audit log file
for certain events using various keys or other characteristics of the
logged format. For more information about ausearch,
refer to Section 31.6, “Querying the Audit Daemon Logs with ausearch”.
The audit dispatcher daemon
(audispd) can be used to relay
event notifications to other applications instead of (or in addition
to) writing them to disk in the audit log. For more information about
audispd, refer to
Section 31.9, “Relaying Audit Event Notifications”.
The autrace utility traces individual processes in
a fashion similar to strace. The output of
autrace is logged to the audit log. For more
information about autrace, refer to
Section 31.7, “Analyzing Processes with autrace”.
Prints a list of the last logged-in users, similarly to
last. aulast searches back
through the audit logs (or the given audit log file) and displays a
list of all users logged in and out based on the range of time in the
audit logs.
Prints the last login for all users of a machine similar to the way
lastlog does. The login name, port, and last login
time will be printed.
Before you can actually start generating audit logs and processing them,
configure the audit daemon itself.
The /etc/audit/auditd.conf configuration file
determines how the audit system functions when the daemon has been
started. For most use cases, the default settings shipped with
openSUSE Leap should suffice. For CAPP environments, most of these
parameters need tweaking. The following list briefly introduces the
parameters available:
log_file = /var/log/audit/audit.log
log_format = RAW
log_group = root
priority_boost = 4
flush = INCREMENTAL
freq = 20
num_logs = 5
disp_qos = lossy
dispatcher = /sbin/audispd
name_format = NONE
##name = mydomain
max_log_file = 6
max_log_file_action = ROTATE
space_left = 75
space_left_action = SYSLOG
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
##tcp_listen_port =
tcp_listen_queue = 5
tcp_max_per_addr = 1
##tcp_client_ports = 1024-65535
tcp_client_max_idle = 0
Depending on whether you want your environment to satisfy the requirements of CAPP, you need to be extra restrictive when configuring the audit daemon. Where you need to use particular settings to meet the CAPP requirements, a “CAPP Environment” note tells you how to adjust the configuration.
log_file, log_format and
log_group
log_file specifies the location where the audit
logs should be stored. log_format determines how
the audit information is written to disk and
log_group defines the group that owns the log
files. Possible values for log_format are
raw (messages are stored exactly as the kernel
sends them) or nolog (messages are discarded and
not written to disk). The data sent to the audit dispatcher is not
affected if you use the nolog mode. The default
setting is raw and you should keep it if you want
to be able to create reports and queries against the audit logs using
the aureport and ausearch tools.
The value for log_group can either be specified
literally or using the group's ID.
In a CAPP environment, have the audit log reside on its own partition. By doing so, you can be sure that the space detection of the audit daemon is accurate and that you do not have other processes consuming this space.
priority_boost
Determine how much of a priority boost the audit daemon should get. Possible values are 0 to 20. The resulting nice value calculates like this: 0 - priority_boost
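For example, with the default priority_boost of 4 the daemon ends up at nice value -4; shell arithmetic makes the formula explicit:

```shell
# nice value = 0 - priority_boost; 4 is the default from auditd.conf.
priority_boost=4
echo "nice value: $((0 - priority_boost))"
```

This prints `nice value: -4`.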
flush and freq
Specifies whether, how, and how often the audit logs should be written
to disk. Valid values for flush are
none, incremental,
data, and sync.
none tells the audit daemon not to make any special
effort to write the audit data to disk. incremental
tells the audit daemon to explicitly flush the data to disk. A
frequency must be specified if incremental is used.
A freq value of 20 tells the
audit daemon to request that the kernel flush the data to disk after
every 20 records. The data option keeps the data
portion of the disk file synchronized at all times while the
sync option takes care of both metadata and data.
In a CAPP environment, make sure that the audit trail is always fully
up to date and complete. Therefore, use sync or
data with the flush parameter.
num_logs
Specify the number of log files to keep if you have given
rotate as the
max_log_file_action. Possible values range from
0 to 99. A value less than
2 means that the log files are not rotated.
As you increase the number of files to rotate, you increase the amount
of work required of the audit daemon. While doing this rotation,
auditd cannot always service
new data arriving from the kernel as quickly, which can result
in a backlog condition (triggering
auditd to react according to
the failure flag, described in Section 31.3, “Controlling the Audit System Using auditctl”).
In this situation, increasing the backlog limit is recommended. Do so
by changing the value of the -b parameter in the
/etc/audit/audit.rules file.
disp_qos and dispatcher
The dispatcher is started by the audit daemon during its start. The
audit daemon relays the audit messages to the application specified in
dispatcher. This application must be a highly
trusted one, because it needs to run as root.
disp_qos determines whether you allow for
lossy or lossless communication
between the audit daemon and the dispatcher.
If you select lossy, the audit daemon might discard
some audit messages when the message queue is full. These events still
get written to disk if log_format is set to
raw, but they might not get through to the
dispatcher. If you select lossless the audit
logging to disk is blocked until there is an empty spot in the message
queue. The default value is lossy.
name_format and name
name_format controls how computer names are
resolved. Possible values are none (no name will be
used), hostname (value returned by gethostname),
fqd (fully qualified host name as received through
a DNS lookup), numeric (IP address) and
user. user is a custom string
that needs to be defined with the name parameter.
max_log_file and max_log_file_action
max_log_file takes a numerical value that specifies
the maximum file size in megabytes that the log file can reach before
a configurable action is triggered. The action to be taken is
specified in max_log_file_action. Possible values
for max_log_file_action are
ignore, syslog,
suspend, rotate, and
keep_logs. ignore tells the
audit daemon to do nothing when the size limit is reached,
syslog tells it to issue a warning and send it to
syslog, and suspend causes the audit daemon to stop
writing logs to disk, leaving the daemon itself still alive.
rotate triggers log rotation using the
num_logs setting. keep_logs also
triggers log rotation, but does not use the num_logs
setting, so all logs are always kept.
To keep a complete audit trail in CAPP environments, the
keep_logs option should be used. If using a
separate partition to hold your audit logs, adjust
max_log_file and num_logs to
use the entire space available on that partition. Note that the more
files that need to be rotated, the longer it takes to get back to
receiving audit events.
space_left and space_left_action
space_left takes a numerical value in megabytes of
remaining disk space that triggers a configurable action by the audit
daemon. The action is specified in
space_left_action. Possible values for this
parameter are ignore, syslog,
email, exec,
suspend, single, and
halt. ignore tells the audit
daemon to ignore the warning and do nothing, syslog
has it issue a warning to syslog, and email sends
an e-mail to the account specified under
action_mail_acct. exec plus a
path to a script executes the given script. Note that it is not
possible to pass parameters to the script. suspend
tells the audit daemon to stop writing to disk but remain alive while
single triggers the system to be brought down to
single user mode. halt triggers a full shutdown of
the system.
Make sure that space_left is set to a value that
gives the administrator enough time to react to the alert and free
enough disk space for the audit daemon to continue to
work. Freeing disk space would involve calling aureport
-t and archiving the oldest logs on a separate archiving
partition or resource. The actual value for
space_left depends on the size of your deployment.
Set space_left_action to email.
action_mail_acct
Specify an e-mail address or alias to which any alert messages should
be sent. The default setting is root, but you can
enter any local or remote account as long as e-mail and the network
are properly configured on your system and
/usr/lib/sendmail exists.
admin_space_left and admin_space_left_action
admin_space_left takes a numerical value in
megabytes of remaining disk space. The system is already running low
on disk space when this limit is reached and the administrator has one
last chance to react to this alert and free disk space for the audit
logs. The value of admin_space_left should be lower
than the value for space_left. The possible values
for admin_space_left_action are the same as for
space_left_action.
Set admin_space_left to a value that would allow
the administrator's actions to be recorded. The action should be set
to single.
disk_full_action
Specify which action to take when the system runs out of disk space for
the audit logs. Valid values are ignore,
syslog, rotate,
exec, suspend,
single, and halt. For an
explanation of these values refer to space_left and space_left_action
.
As the disk_full_action is triggered when there is
absolutely no more room for any audit logs, you should bring the
system down to single-user mode (single) or shut
it down completely (halt).
disk_error_action
Specify which action to take when the audit daemon encounters any kind
of disk error while writing the logs to disk or rotating the logs. The
possible values are the same as for
space_left_action.
Use syslog, single, or
halt depending on your site's policies regarding
the handling of any kind of hardware failure.
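Taken together, the disk-space parameters discussed above could look as follows in /etc/audit/auditd.conf. The values are illustrative only; adjust them to the size of your deployment:

```
space_left = 500
space_left_action = email
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = single
disk_full_action = halt
disk_error_action = syslog
```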
tcp_listen_port, tcp_listen_queue,
tcp_client_ports, tcp_client_max_idle, and
tcp_max_per_addr
The audit daemon can receive audit events from other audit daemons.
The tcp parameters let you control incoming connections. Specify a
port between 1 and 65535 with tcp_listen_port on
which the auditd will listen.
tcp_listen_queue lets you configure a maximum value
for pending connections. Make sure not to set a value too small, since
the number of pending connections may be high under certain
circumstances, such as after a power outage.
tcp_client_ports defines which client ports are
allowed. Either specify a single port or a port range with numbers
separated by a dash (for example 1-1023 for all privileged ports).
Specifying a single allowed client port may make it difficult for the
client to restart their audit subsystem, as it will be unable to
re-create a connection with the same host addresses and ports until
the connection closure TIME_WAIT state times out. If a client does not
respond anymore, auditd
complains. Specify the number of seconds after which this will happen
with tcp_client_max_idle. Keep in mind that this
setting is valid for all clients and therefore should be higher than
any individual client heartbeat setting, preferably by a factor of
two. tcp_max_per_addr is a numeric value
representing how many concurrent connections from one IP address are
allowed.
We recommend using privileged ports for client and server to prevent non-root (CAP_NET_BIND_SERVICE) programs from binding to those ports.
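For log aggregation across machines, the corresponding connection parameters in /etc/audit/auditd.conf might look like the following sketch. The values are examples, not recommendations:

```
tcp_listen_port = 60
tcp_listen_queue = 5
tcp_client_ports = 1-1023
tcp_client_max_idle = 60
tcp_max_per_addr = 1
```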
When the daemon configuration in
/etc/audit/auditd.conf is complete, the next step is
to focus on controlling the amount of auditing the daemon does, and to
assign sufficient resources and limits to the daemon so it can operate
smoothly.
auditctl #
auditctl is responsible for controlling the status and
some basic system parameters of the audit daemon. It controls the amount
of auditing performed on the system. Using audit rules,
auditctl controls which components of your system are
subjected to the audit and to what extent they are audited. Audit rules
can be passed to the audit daemon on the auditctl
command line or by composing a rule set and instructing the audit
daemon to process this file. By default, the
auditd daemon is configured to
check for audit rules under /etc/audit/audit.rules.
For more details on audit rules, refer to
Section 31.4, “Passing Parameters to the Audit System”.
The main auditctl commands to control basic audit
system parameters are:
auditctl -e to enable or disable
audit
auditctl -f to control the failure
flag
auditctl -r to control the rate
limit for audit messages
auditctl -b to control the backlog
limit
auditctl -s to query the current
status of the audit daemon
Before running auditctl -S on your system, add
-F arch=b64 to prevent the architecture mismatch
warning.
The -e, -f, -r, and
-b options can also be specified in the
audit.rules file to avoid having to enter them each
time the audit daemon is started.
Any time you query the status of the audit daemon with
auditctl -s or change the status flag
with auditctl
-e FLAG, a status message
(including information on each of the above-mentioned parameters) is
printed. The following example highlights the typical audit status
message.
auditctl -s
AUDIT_STATUS: enabled=1 flag=2 pid=3105 rate_limit=0 backlog_limit=8192 lost=0 backlog=0
| Flag | Meaning [Possible Values] | Command |
|---|---|---|
| enabled | Set the enable flag. [0..2] 0=disable, 1=enable, 2=enable and lock down the configuration | auditctl -e [0..2] |
| flag | Set the failure flag. [0..2] 0=silent, 1=printk, 2=panic (immediate halt without synchronizing pending data to disk) | auditctl -f [0..2] |
| pid | Process ID under which auditd is running. | — |
| rate_limit | Set a limit in messages per second. If the rate is not zero and is exceeded, the action specified in the failure flag is triggered. | auditctl -r RATE |
| backlog_limit | Specify the maximum number of outstanding audit buffers allowed. If all buffers are full, the action specified in the failure flag is triggered. | auditctl -b BACKLOG |
| lost | Count the current number of lost audit messages. | — |
| backlog | Count the current number of outstanding audit buffers. | — |
Commands to control the audit system can be invoked individually from the
shell using auditctl or batch read from a file using
auditctl -R. The latter method is
used by the init scripts to load rules from the file
/etc/audit/audit.rules after the audit daemon has
been started. The rules are executed in order from top to bottom. Each of
these rules would expand to a separate auditctl
command. The syntax used in the rules file is the same as that used for
the auditctl command.
Changes made to the running audit system by executing
auditctl on the command line are not persistent across
system restarts. For changes to persist, add them to the
/etc/audit/audit.rules file and, if they are not
currently loaded into audit, restart the audit system to load the
modified rule set by using the systemctl restart
auditd command.
-b 1000  # 1
-f 1     # 2
-r 10    # 3
-e 1     # 4

1. Specify the maximum number of outstanding audit buffers. Depending on the level of logging activity, you might need to adjust the number of buffers to avoid causing too heavy an audit load on your system.
2. Specify the failure flag to use. See Table 31.1, “Audit Status Flags” for possible values.
3. Specify the maximum number of messages per second that may be issued by the kernel. See Table 31.1, “Audit Status Flags” for details.
4. Enable or disable the audit subsystem.
Using audit, you can track any kind of file system access to important files, configurations or resources. You can add watches on these and assign keys to each kind of watch for better identification in the logs.
-w /etc/shadow                       # 1
-w /etc -p rx                        # 2
-w /etc/passwd -k fk_passwd -p rwxa  # 3

1. The -w option adds a watch to the /etc/shadow file. Because no permission filter (-p) is given, all access types are audited.
2. This rule adds a watch to the /etc directory and filters for read and execute access (rx).
3. This rule adds a file watch to /etc/passwd, filters for read, write, execute, and attribute change access (rwxa), and assigns the key fk_passwd to make the matching events easier to find in the logs.
System call auditing lets you track your system's behavior on a level even below the application level. When designing these rules, consider that auditing a great many system calls may increase your system load and cause you to run out of disk space. Consider carefully which events need tracking and how they can be filtered to be even more specific.
-a exit,always -S mkdir                           # 1
-a exit,always -S access -F a1=4                  # 2
-a exit,always -S ipc -F a0=2                     # 3
-a exit,always -S open -F success!=0              # 4
-a task,always -F auid=0                          # 5
-a task,always -F uid=0 -F auid=501 -F gid=wheel  # 6

1. This rule activates auditing for the mkdir system call.
2. This rule adds auditing to the access system call, but only if the second argument of the system call (a1, the requested access mode) is 4 (R_OK, a check for read permission).
3. This rule adds an audit context to the IPC multiplexed system call. The specific call is selected with the value of the first argument (a0=2).
4. This rule audits failed attempts to call open.
5. This rule is an example of a task rule (keyword: task), evaluated when a process is created rather than on system call exit. It audits all tasks carrying the audit ID 0.
6. This last rule makes heavy use of filters. All filter options are combined with a logical AND operator, meaning that this rule applies to all tasks that carry the audit ID 501, run as root, and have wheel as their group.
For more details on filtering system call arguments, refer to Section 33.6, “Filtering System Call Arguments”.
Not only can you add rules to the audit system, you can also remove them. There are different methods for deleting the entire rule set at once or for deleting individual system call rules or file and directory watches:
-D                       # 1
-d exit,always -S mkdir  # 2
-W /etc                  # 3

1. Clear the queue of audit rules and delete any preexisting rules. This rule is used as the first rule in /etc/audit/audit.rules to make sure that the rules about to be loaded do not clash with any existing ones.
2. This rule deletes a system call rule. The -d option must be followed by the rule exactly as it was originally added.
3. This rule tells audit to discard the rule with the directory watch on /etc.
To get an overview of which rules are currently in use in your audit
setup, run auditctl -l. This command
displays all rules with one rule per line.
auditctl -l
exit,always watch=/etc perm=rx
exit,always watch=/etc/passwd perm=rwxa key=fk_passwd
exit,always watch=/etc/shadow perm=rwxa
exit,always syscall=mkdir
exit,always a1=4 (0x4) syscall=access
exit,always a0=2 (0x2) syscall=ipc
exit,always success!=0 syscall=open
You can build very sophisticated audit rules by using the various filter
options. Refer to the auditctl(8) man page for more
information about the options available for building audit filter rules,
and audit rules in general.
To understand what the aureport utility does, it is
vital to know how the logs generated by the audit daemon are structured,
and what exactly is recorded for an event. Only then can you decide which
report types are most appropriate for your needs.
The following examples highlight two typical events that are logged by
audit and how their trails in the audit log are read. The audit log or
logs (if log rotation is enabled) are stored in the
/var/log/audit directory. The first example is a
simple less command. The second example covers a
great deal of PAM activity in the logs when a user tries to remotely log
in to a machine running audit.
type=SYSCALL msg=audit(1234874638.599:5207): arch=c000003e syscall=2 success=yes exit=4 a0=62fb60 a1=0 a2=31 a3=0 items=1 ppid=25400 pid=25616 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="less" exe="/usr/bin/less" key="doc_log"
type=CWD msg=audit(1234874638.599:5207): cwd="/root"
type=PATH msg=audit(1234874638.599:5207): item=0 name="/var/log/audit/audit.log" inode=1219041 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00
The above event, a simple less
/var/log/audit/audit.log, wrote three messages to the log. All
of them are closely linked together and you would not be able to make
sense of one of them without the others. The first message reveals the
following information:
type
The type of event recorded. In this case, it assigns the
SYSCALL type to an event triggered by a system
call. The CWD event was recorded to record the
current working directory at the time of the syscall. A
PATH event is generated for each path passed to
the system call. The open system call takes only one path argument, so
it generates only one PATH event. It is important
to understand that the PATH event reports the path
name string argument without any further interpretation, so a
relative path requires manual combination with the path reported by
the CWD event to determine the object accessed.
msg
A message ID enclosed in brackets. The ID splits into two parts. All
characters before the : represent a Unix epoch
time stamp. The number after the colon represents the actual event
ID. All events that are logged from one application's system call
have the same event ID. If the application makes a second system
call, it gets another event ID.
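The split described above can be sketched mechanically. A minimal Python example (the parse_msg_id helper is hypothetical; the message format is the one shown in the log examples):

```python
import datetime

def parse_msg_id(msg: str) -> tuple:
    """Split an audit message ID such as 'audit(1234874638.599:5207)'
    into a UTC timestamp and the numeric event ID."""
    inner = msg[msg.index("(") + 1 : msg.index(")")]
    stamp, event_id = inner.split(":")
    ts = datetime.datetime.fromtimestamp(float(stamp), tz=datetime.timezone.utc)
    return ts, int(event_id)

ts, event_id = parse_msg_id("audit(1234874638.599:5207)")
print(event_id)  # 5207
print(ts.year)   # 2009
```

The same event ID (5207) appears on all three records of the less example above, which is what makes them correlatable.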
arch
References the CPU architecture of the system call. Decode this
information using the -i option on any of your
ausearch commands when searching the logs.
syscall
The type of system call, as it would be printed by strace for
this particular system call. This data is taken from the list of
system calls under /usr/include/asm/unistd.h and
may vary depending on the architecture. In this case,
syscall=2 refers to the open system call (see
man open(2)) invoked by the less application.
success
Whether the system call succeeded or failed.
exit
The exit value returned by the system call. For the
open system call used in this example, this is the
file descriptor number. This varies by system call.
a0 to a3
The first four arguments to the system call in numeric form. The
values of these are system call dependent. In this example (an
open system call), the following are used:
a0=62fb60 a1=8000 a2=31 a3=0
a0 is the start address of the passed path name.
a1 is the flags. 8000 in hex
notation translates to 100000 in octal notation,
which in turn translates to O_LARGEFILE.
a2 is the mode, which, because
O_CREAT was not specified, is unused.
a3 is not passed by the open
system call. Check the manual page of the relevant system call to
find out which arguments are used with it.
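The hex-to-octal conversion mentioned above is easy to verify; a small sketch (0o100000 is the x86-64 value of O_LARGEFILE, and the flag value is architecture-dependent):

```python
# a1=8000 from the audit log is a hexadecimal value.
a1 = int("8000", 16)

# 0x8000 equals 0o100000, the x86-64 value of the O_LARGEFILE open(2) flag.
print(oct(a1))         # 0o100000
print(a1 == 0o100000)  # True
```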
items
The number of strings passed to the application.
ppid
The process ID of the parent of the process analyzed.
pid
The process ID of the process analyzed.
auid
The audit ID. A process is given an audit ID on user login. This ID
is then handed down to any child process started by the initial
process of the user. Even if the user changes their identity (for
example, becomes root), the audit ID stays the same. Thus
you can always trace actions to the original user who logged in.
uid
The user ID of the user who started the process. In this case,
0 for root.
gid
The group ID of the user who started the process. In this case,
0 for root.
euid, suid, fsuid
Effective user ID, set user ID, and file system user ID of the user that started the process.
egid, sgid, fsgid
Effective group ID, set group ID, and file system group ID of the user that started the process.
tty
The terminal from which the application was started. In this case, a pseudo-terminal used in an SSH session.
ses
The login session ID. This process attribute is set when a user logs in and can tie any process to a particular user login.
comm
The application name under which it appears in the task list.
exe
The resolved path name to the binary program.
subj
auditd records whether the
process is subject to any security context, such as AppArmor.
unconstrained, as in this case, means that the
process is not confined with AppArmor. If the process had been
confined, the binary path name plus the AppArmor profile mode would
have been logged.
key
If you are auditing many directories or files, assign
key strings to each of these watches. You can use these keys with
ausearch to search the logs for events of this
type only.
The second message triggered by the example less call
does not reveal anything apart from the current working directory when
the less command was executed.
The third message reveals the following (the type and
message flags have already been introduced):
item
In this example, item references the
a0 argument—a path—that is
associated with the original SYSCALL message. Had
the original call had more than one path argument (such as a
cp or mv command), an
additional PATH event would have been logged for
the second path argument.
name
Refers to the path name passed as an argument to the open system call.
inode
Refers to the inode number corresponding to name.
dev
Specifies the device on which the file is stored. In this case,
08:06, which stands for
/dev/sda1 or “first partition on the first
IDE device.”
mode
Numerical representation of the file's access permissions. In this
case, root has read and write permissions and his group
(root) has read access while the entire rest of the world
cannot access the file.
ouid and ogid
Refer to the UID and GID of the inode itself.
rdev
Not applicable for this example. The rdev entry
only applies to block or character devices, not to files.
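Both mode and dev are plain numbers that the standard library can decode. A sketch using Python's stat and os modules with the values from the PATH record above:

```python
import os
import stat

# mode=0100644: a regular file with rw-r--r-- permissions.
mode = 0o100644
print(stat.filemode(mode))  # -rw-r--r--
print(stat.S_ISREG(mode))   # True

# dev=08:06: major device number 8, minor device number 6.
dev = os.makedev(8, 6)
print(os.major(dev), os.minor(dev))  # 8 6
```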
Example 31.8, “An Advanced Audit Event—Login via SSH” highlights the audit events triggered by an incoming SSH connection. Most of the messages are related to the PAM stack and reflect the different stages of the SSH PAM process. Several of the audit messages carry nested PAM messages in them that signify that a particular stage of the PAM process has been reached. Although the PAM messages are logged by audit, audit assigns its own message type to each event:
type=USER_AUTH msg=audit(1234877011.791:7731): user pid=26127 uid=0 (1) auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct="root" exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=ssh res=success)'
type=USER_ACCT msg=audit(1234877011.795:7732): user pid=26127 uid=0 (2) auid=4294967295 ses=4294967295 msg='op=PAM:accounting acct="root" exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=ssh res=success)'
type=CRED_ACQ msg=audit(1234877011.799:7733): user pid=26125 uid=0 (3) auid=4294967295 ses=4294967295 msg='op=PAM:setcred acct="root" exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=LOGIN msg=audit(1234877011.799:7734): login pid=26125 uid=0 old auid=4294967295 new auid=0 old ses=4294967295 new ses=1172
type=USER_START msg=audit(1234877011.799:7735): user pid=26125 uid=0 (4) auid=0 ses=1172 msg='op=PAM:session_open acct="root" exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=USER_LOGIN msg=audit(1234877011.823:7736): user pid=26128 uid=0 (5) auid=0 ses=1172 msg='uid=0: exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
type=CRED_REFR msg=audit(1234877011.828:7737): user pid=26128 uid=0 (6) auid=0 ses=1172 msg='op=PAM:setcred acct="root" exe="/usr/sbin/sshd" (hostname=jupiter.example.com, addr=192.168.2.100, terminal=/dev/pts/0 res=success)'
(1) PAM reports that it has successfully requested user authentication for root.
(2) PAM reports that it has successfully determined whether the user is authorized to log in.
(3) PAM reports that the appropriate credentials to log in have been acquired and that the terminal changed to a normal terminal (/dev/pts/0).
(4) PAM reports that it has successfully opened a session for root.
(5) The user has successfully logged in. This event is the one used by aureport -l to report about user logins.
(6) PAM reports that the credentials have been successfully reacquired.
The raw audit reports stored in the /var/log/audit
directory tend to become very bulky and hard to understand. To more
easily find relevant messages, use the aureport
utility and create custom reports.
The following use cases highlight a few of the possible report types
that you can generate with aureport:
When the audit logs have moved to another machine or when you want to
analyze the logs of several machines on your local machine
without connecting to each of them individually, move the
logs to a local file and have aureport analyze
them locally:
tux > sudo aureport -if myfile
Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 14:52:27.971
Selected time for report: 03/02/09 14:13:38 - 17/02/09 14:52:27.971
Number of changes in configuration: 13
Number of changes to accounts, groups, or roles: 0
Number of logins: 6
Number of failed logins: 13
Number of authentications: 7
Number of failed authentications: 573
Number of users: 1
Number of terminals: 9
Number of host names: 4
Number of executables: 17
Number of files: 279
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 1211
Number of events: 5320
The above command, aureport without any arguments,
provides only the standard general summary report generated from the
logs contained in myfile. To create more
detailed reports, combine the -if option with any of
the options below. For example, generate a login report that is
limited to a certain time frame:
tux > sudo aureport -l -ts 14:00 -te 15:00 -if myfile
Login Report
============================================
# date time auid host term exe success event
============================================
1. 17/02/09 14:21:09 root: 192.168.2.100 sshd /usr/sbin/sshd no 7718
2. 17/02/09 14:21:15 0 jupiter /dev/pts/3 /usr/sbin/sshd yes 7724
Some information, such as user IDs, is printed in numeric form. To
convert it into a human-readable text format, add the
-i option to your aureport
command.
If you are interested in the current audit statistics (events,
logins, processes, etc.), run aureport without any
other option.
If you want to break down the overall statistics of plain
aureport to the statistics of failed events, use
aureport --failed:
tux > sudo aureport --failed
Failed Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 14:57:35.183
Selected time for report: 03/02/09 14:13:38 - 17/02/09 14:57:35.183
Number of changes in configuration: 0
Number of changes to accounts, groups, or roles: 0
Number of logins: 0
Number of failed logins: 13
Number of authentications: 0
Number of failed authentications: 574
Number of users: 1
Number of terminals: 5
Number of host names: 4
Number of executables: 11
Number of files: 77
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 708
Number of events: 1583
If you want to break down the overall statistics of a plain
aureport to the statistics of successful events,
use aureport --success:
tux > sudo aureport --success
Success Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 15:00:01.535
Selected time for report: 03/02/09 14:13:38 - 17/02/09 15:00:01.535
Number of changes in configuration: 13
Number of changes to accounts, groups, or roles: 0
Number of logins: 6
Number of failed logins: 0
Number of authentications: 7
Number of failed authentications: 0
Number of users: 1
Number of terminals: 7
Number of host names: 3
Number of executables: 16
Number of files: 215
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 0
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 558
Number of events: 3739
In addition to the dedicated summary reports (main summary and failed
and success summary), use the --summary option with
most of the other options to create summary reports for a particular
area of interest only. Not all reports support this option, however.
This example creates a summary report for user login events:
tux > sudo aureport -u -i --summary
User Summary Report
===========================
total auid
===========================
5640 root
13 tux
3 wilber
To get an overview of the events logged by audit, use the
aureport -e command. This command
generates a numbered list of all events including date, time, event
number, event type, and audit ID.
tux > sudo aureport -e -ts 14:00 -te 14:21
Event Report
===================================
# date time event type auid success
===================================
1. 17/02/09 14:20:27 7462 DAEMON_START 0 yes
2. 17/02/09 14:20:27 7715 CONFIG_CHANGE 0 yes
3. 17/02/09 14:20:57 7716 USER_END 0 yes
4. 17/02/09 14:20:57 7717 CRED_DISP 0 yes
5. 17/02/09 14:21:09 7718 USER_LOGIN -1 no
6. 17/02/09 14:21:15 7719 USER_AUTH -1 yes
7. 17/02/09 14:21:15 7720 USER_ACCT -1 yes
8. 17/02/09 14:21:15 7721 CRED_ACQ -1 yes
9. 17/02/09 14:21:15 7722 LOGIN 0 yes
10. 17/02/09 14:21:15 7723 USER_START 0 yes
11. 17/02/09 14:21:15 7724 USER_LOGIN 0 yes
12. 17/02/09 14:21:15 7725 CRED_REFR 0 yes
To analyze the log from a process's point of view, use the
aureport -p command. This command
generates a numbered list of all process events including date, time,
process ID, name of the executable, system call, audit ID, and event
number.
aureport -p
Process ID Report
======================================
# date time pid exe syscall auid event
======================================
1. 13/02/09 15:30:01 32742 /usr/sbin/cron 0 0 35
2. 13/02/09 15:30:01 32742 /usr/sbin/cron 0 0 36
3. 13/02/09 15:38:34 32734 /usr/lib/gdm/gdm-session-worker 0 -1 37
To analyze the audit log from a system call's point of view, use the
aureport -s command. This command
generates a numbered list of all system call events including date,
time, number of the system call, process ID, name of the command that
used this call, audit ID, and event number.
tux > sudo aureport -s
Syscall Report
=======================================
# date time syscall pid comm auid event
=======================================
1. 16/02/09 17:45:01 2 20343 cron -1 2279
2. 16/02/09 17:45:02 83 20350 mktemp 0 2284
3. 16/02/09 17:45:02 83 20351 mkdir 0 2285
To analyze the audit log from an executable's point of view, use the
aureport -x command. This command
generates a numbered list of all executable events including date,
time, name of the executable, the terminal it is run in, the host
executing it, the audit ID, and event number.
aureport -x
Executable Report
====================================
# date time exe term host auid event
====================================
1. 13/02/09 15:08:26 /usr/sbin/sshd sshd 192.168.2.100 -1 12
2. 13/02/09 15:08:28 /usr/lib/gdm/gdm-session-worker :0 ? -1 13
3. 13/02/09 15:08:28 /usr/sbin/sshd ssh 192.168.2.100 -1 14
To generate a report from the audit log that focuses on file access,
use the aureport -f command. This
command generates a numbered list of all file-related events
including date, time, name of the accessed file, number of the system
call accessing it, success or failure of the command, the executable
accessing the file, audit ID, and event number.
tux > sudo aureport -f
File Report
===============================================
# date time file syscall success exe auid event
===============================================
1. 16/02/09 17:45:01 /etc/shadow 2 yes /usr/sbin/cron -1 2279
2. 16/02/09 17:45:02 /tmp/ 83 yes /bin/mktemp 0 2284
3. 16/02/09 17:45:02 /var 83 no /bin/mkdir 0 2285
To generate a report from the audit log that illustrates which users
are running what executables on your system, use the
aureport -u command. This command
generates a numbered list of all user-related events including date,
time, audit ID, terminal used, host, name of the executable, and an
event ID.
aureport -u
User ID Report
====================================
# date time auid term host exe event
====================================
1. 13/02/09 15:08:26 -1 sshd 192.168.2.100 /usr/sbin/sshd 12
2. 13/02/09 15:08:28 -1 :0 ? /usr/lib/gdm/gdm-session-worker 13
3. 14/02/09 08:25:39 -1 ssh 192.168.2.101 /usr/sbin/sshd 14
To create a report that focuses on login attempts to your machine,
run the aureport -l command. This
command generates a numbered list of all login-related events
including date, time, audit ID, host and terminal used, name of the
executable, success or failure of the attempt, and an event ID.
tux > sudo aureport -l -i
Login Report
============================================
# date time auid host term exe success event
============================================
1. 13/02/09 15:08:31 tux: 192.168.2.100 sshd /usr/sbin/sshd no 19
2. 16/02/09 12:39:05 root: 192.168.2.101 sshd /usr/sbin/sshd no 2108
3. 17/02/09 15:29:07 geeko: ? tty3 /bin/login yes 7809
To analyze the logs for a particular time frame, such as only the
working hours of Feb 16, 2009, first find out whether this data is
contained in the current audit.log or whether
the logs have been rotated, by running aureport
-t:
aureport -t
Log Time Range Report
=====================
/var/log/audit/audit.log: 03/02/09 14:13:38.225 - 17/02/09 15:30:01.636
The current audit.log contains all the desired
data. Otherwise, use the -if option to point the
aureport commands to the log file that contains
the needed data.
Then, specify the start date and time and the end date and time of the desired time frame and combine it with the report option needed. This example focuses on login attempts:
tux > sudo aureport -ts 02/16/09 8:00 -te 02/16/09 18:00 -l
Login Report
============================================
# date time auid host term exe success event
============================================
1. 16/02/09 12:39:05 root: 192.168.2.100 sshd /usr/sbin/sshd no 2108
2. 16/02/09 12:39:12 0 192.168.2.100 /dev/pts/1 /usr/sbin/sshd yes 2114
3. 16/02/09 13:09:28 root: 192.168.2.100 sshd /usr/sbin/sshd no 2131
4. 16/02/09 13:09:32 root: 192.168.2.100 sshd /usr/sbin/sshd no 2133
5. 16/02/09 13:09:37 0 192.168.2.100 /dev/pts/2 /usr/sbin/sshd yes 2139
The start date and time are specified with the -ts
option. Any event that has a time stamp equal to or after your given
start time appears in the report. If you omit the date,
aureport assumes that you meant
today. If you omit the time, it assumes that the
start time should be midnight of the date specified.
Specify the end date and time with the -te option.
Any event that has a time stamp equal to or before your given event
time appears in the report. If you omit the date,
aureport assumes that you meant today. If you omit
the time, it assumes that the end time should be now. Use the same
format for the date and time as for -ts.
All reports except the summary ones are printed in column format and sent to STDOUT, which means that this data can be written to other commands very easily. The visualization scripts introduced in Section 31.8, “Visualizing Audit Data” are examples of how to further process the data generated by audit.
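Because the reports are plain columnar text, they are easy to post-process with scripts. A minimal sketch in Python, assuming the File Report layout shown earlier (the sample lines are taken from that example; the column order is an assumption to verify against your aureport version):

```python
# Count file-access events per executable from `aureport -f` report rows.
from collections import Counter

report = """\
1. 16/02/09 17:45:01 /etc/shadow 2 yes /usr/sbin/cron -1 2279
2. 16/02/09 17:45:02 /tmp/ 83 yes /bin/mktemp 0 2284
3. 16/02/09 17:45:02 /var 83 no /bin/mkdir 0 2285
"""

counts = Counter()
for line in report.splitlines():
    # Columns: index, date, time, file, syscall, success, exe, auid, event
    fields = line.split()
    counts[fields[6]] += 1

print(counts["/bin/mktemp"])  # 1
```

On a live system, a pipeline such as `aureport -f | awk '{print $7}' | sort | uniq -c` (after skipping the header lines) would produce a similar tally.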
ausearch #
The aureport tool helps you to create overall
summaries of what is happening on the system, but if you are interested
in the details of a particular event, ausearch is the
tool to use.
ausearch allows you to search the audit logs using
special keys and search phrases that relate to most of the flags that
appear in event messages in
/var/log/audit/audit.log. Not all record types
contain the same search phrases. There are no hostname
or uid entries in a PATH record,
for example.
When searching, make sure that you choose appropriate search criteria to
catch all records you need. On the other hand, you could be searching for
a specific type of record and still get various other related records
along with it. This is caused by different parts of the kernel
contributing additional records for events that are related to the one to
find. For example, you would always get a PATH record
along with the SYSCALL record for an
open system call.
Any of the command line options can be combined with logical AND operators to narrow down your search.
When the audit logs have moved to another machine or when you want to
analyze the logs of several machines on your local machine without
wanting to connect to each of these individually, move the logs to a
local file and have ausearch search them locally:
tux > sudo ausearch -option -if myfile
Some information, such as user IDs, is printed in numeric form. To
convert it into a human-readable text format, add the
-i option to your ausearch
command.
If you have previously run an audit report or an
autrace, you might want to analyze the trail of a
particular event in the log. Most of the report types described in
Section 31.5, “Understanding the Audit Logs and Generating Reports” include audit event IDs in their
output. An audit event ID is the second part of an audit message ID,
which consists of a Unix epoch time stamp and the audit event ID
separated by a colon. All events that are logged from one
application's system call have the same event ID. Use this event ID
with ausearch to retrieve this event's trail from
the log.
Use a command similar to the following:
tux > sudo ausearch -a 5207
----
time->Tue Feb 17 13:43:58 2009
type=PATH msg=audit(1234874638.599:5207): item=0 name="/var/log/audit/audit.log" inode=1219041 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1234874638.599:5207): cwd="/root"
type=SYSCALL msg=audit(1234874638.599:5207): arch=c000003e syscall=2 success=yes exit=4 a0=62fb60 a1=0 a2=31 a3=0 items=1 ppid=25400 pid=25616 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="less" exe="/usr/bin/less" key="doc_log"
The ausearch -a command grabs all
records in the logs that are related to the audit event ID provided
and displays them. This option can be combined with any other option.
To search for audit records of a particular message type, use the
ausearch -m
MESSAGE_TYPE command. Examples of
valid message types include PATH,
SYSCALL, and USER_LOGIN. Running
ausearch -m without a message type
displays a list of all message types.
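For example, assuming the audit daemon has logged login events, a search for all login records in interpreted form could look like this:

```shell
# List all USER_LOGIN records in human-readable form
# (requires a running audit daemon and root privileges).
sudo ausearch -m USER_LOGIN -i
```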
To view records associated with a particular login user ID, use the
ausearch -ul command. It displays
any records related to the specified login user ID, provided that user
was able to log in successfully.
View records related to any of the user IDs (both user ID and
effective user ID) with ausearch
-ua. View records related to a particular user ID
with ausearch -ui
UID. To search for records related to
a particular effective user ID, use ausearch
-ue EUID. Searching for a
user ID means matching the user ID of the user who created the process.
Searching for an effective user ID means matching the user ID and
privileges that are required to run this process.
View records related to any of the group IDs (both group ID and
effective group ID) with the ausearch
-ga command. View records related to a particular
group ID with ausearch -gi
GID. To search for records related to
a particular effective group ID, use ausearch
-ge EGID.
View records related to a certain command, using the
ausearch -c
COMM_NAME command, for example,
ausearch -c less for all records
related to the less command.
View records related to a certain executable with the
ausearch -x
EXE command, for example
ausearch -x /usr/bin/less for all
records related to the /usr/bin/less executable.
View records related to a certain system call with the
ausearch -sc
SYSCALL command, for example,
ausearch -sc open for all records related to the
open system call.
View records related to a certain process ID with the
ausearch -p
PID command, for example
ausearch -p 13368 for all records
related to this process ID.
View records containing a certain system call success value with
ausearch -sv
SUCCESS_VALUE, for example,
ausearch -sv yes for all
successful system calls.
View records containing a certain file name with
ausearch -f
FILE_NAME, for example,
ausearch -f /foo/bar for all
records related to the /foo/bar file. Using the
file name alone would work as well, but using relative paths does not
work.
View records of events related to a certain terminal only with
ausearch -tm
TERM, for example,
ausearch -tm ssh to view all
records related to events on the SSH terminal and
ausearch -tm tty to view all
events related to the console.
View records related to a certain remote host name with
ausearch -hn
HOSTNAME, for example,
ausearch -hn jupiter.example.com. You can
use a host name, fully qualified domain name, or numeric network
address.
View records that contain a certain key assigned in the audit rule set
to identify events of a particular type. Use the
ausearch -k
KEY_FIELD, for example,
ausearch -k CFG_etc to display any
records containing the CFG_etc key.
View records that contain a certain string assigned in the audit rule
set to identify events of a particular type. The whole string is
matched on file name, host name, and terminal. Use the
ausearch -w
WORD command.
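For example, to match the string jupiter against the file name, host name, and terminal fields of all records (the search term here is only an illustration):

```shell
# Match the word "jupiter" against file name, host name, and terminal.
sudo ausearch -w jupiter
```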
Use -ts and -te to limit the scope
of your searches to a certain time frame. The -ts
option is used to specify the start date and time and the
-te option is used to specify the end date and time.
These options can be combined with any of the above. The use of these
options is similar to use with aureport.
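For example, a time-limited search could look like the following. The explicit date format shown assumes a US-style locale; ausearch also accepts keywords such as today, yesterday, or recent:

```shell
# All failed-authentication records since the start of today.
sudo ausearch -m USER_AUTH -sv no -ts today
# Records within a fixed window, using explicit dates and times.
sudo ausearch -ts 02/17/2009 13:00:00 -te 02/17/2009 14:00:00
```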
autrace
In addition to monitoring your system using the rules you set up, you can
also perform dedicated audits of individual processes using the
autrace command. autrace works
similarly to the strace command, but gathers slightly
different information. The output of autrace is
written to /var/log/audit/audit.log and does not
look any different from the standard audit log entries.
When performing an autrace on a process, make sure
that any audit rules are purged from the queue to avoid these rules
clashing with the ones autrace adds itself. Delete the
audit rules with the auditctl -D
command. This stops all normal auditing.
tux > sudo auditctl -D
No rules
tux > sudo autrace /usr/bin/less
Waiting to execute: /usr/bin/less
Cleaning up...
No rules
Trace complete. You can locate the records with 'ausearch -i -p 7642'
Always use the full path to the executable to track with
autrace. After the trace is complete,
autrace provides the event ID of the trace, so you can
analyze the entire data trail with ausearch. To
restore the audit system to use the audit rule set again, restart the
audit daemon with systemctl restart auditd.
Neither the data trail in /var/log/audit/audit.log
nor the different report types generated by aureport,
described in Section 31.5.2, “Generating Custom Audit Reports”, provide an
intuitive reading experience to the user. The aureport
output is formatted in columns and thus easily available to any sed,
Perl, or awk scripts that users might connect to the audit framework to
visualize the audit data.
The visualization scripts (see Section 32.6, “Configuring Log Visualization”) are one example of how to use standard Linux tools available with openSUSE Leap or any other Linux distribution to create easy-to-read audit output. The following examples help you understand how the plain audit reports can be transformed into human readable graphics.
The first example illustrates the relationship of programs and system
calls. To get to this kind of data, you need to determine the appropriate
aureport command that delivers the source data from
which to generate the final graphic:
tux > sudo aureport -s -i

Syscall Report
=======================================
# date time syscall pid comm auid event
=======================================
1. 16/02/09 17:45:01 open 20343 cron unset 2279
2. 16/02/09 17:45:02 mkdir 20350 mktemp root 2284
3. 16/02/09 17:45:02 mkdir 20351 mkdir root 2285
...
The first thing that the visualization script needs to do on this report
is to extract only those columns that are of interest, in this example,
the syscall and the comm columns.
The output is sorted and duplicates are removed, then the result is
piped into the visualization program itself:
LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $6" "$4 }' | sort | uniq | mkgraph
The second example illustrates the different types of events and how many
of each type have been logged. The appropriate
aureport command to extract this kind of information
is aureport -e:
tux > sudo aureport -e -i --summary

Event Summary Report
======================
total type
======================
2434 SYSCALL
816 USER_START
816 USER_ACCT
814 CRED_ACQ
810 LOGIN
806 CRED_DISP
779 USER_END
99 CONFIG_CHANGE
52 USER_LOGIN
Because this type of report already contains a two-column output, it is simply fed into the visualization script and transformed into a bar chart.
tux > sudo aureport -e -i --summary | mkbar events
For background information about the visualization of audit data, refer to the Web site of the audit project at http://people.redhat.com/sgrubb/audit/visualize/index.html.
The auditing system also allows external applications to access and
use the auditd daemon in real
time. This feature is provided by the so-called audit
dispatcher, which allows, for example, intrusion detection
systems to use auditd to receive
enhanced detection information.
audispd is a daemon which
controls the audit dispatcher. It is normally started by
auditd.
audispd takes audit events and
distributes them to the programs which want to analyze them in real time.
Configuration of audispd is stored
in /etc/audisp/audispd.conf. The file has the
following options:
q_depth
Specifies the size of the event dispatcher internal queue. If syslog complains about audit events getting dropped, increase this value. Default is 80.
overflow_action
Specifies the way the audit daemon will react to the internal queue
overflow. Possible values are ignore (nothing
happens), syslog (issues a warning to syslog),
suspend (audispd will stop processing events),
single (the computer system will be put in single
user mode), or halt (shuts the system down).
priority_boost
Specifies the priority for the audit event dispatcher (in addition to the audit daemon priority itself). Default is 4 which means no change in priority.
name_format
Specifies the way the computer node name is inserted into the audit
event. Possible values are none (no computer name is
inserted), hostname (name returned by the
gethostname system call),
fqd (fully qualified domain name of the machine),
numeric (IP address of the machine), or
user (user defined string from the
name option). Default is none.
name
Specifies a user defined string which identifies the machine. The
name_format option must be set to
user, otherwise this option is ignored.
max_restarts
A non-negative number that tells the audit event dispatcher how many times it can try to restart a crashed plug-in. The default is 10.
q_depth = 80
overflow_action = SYSLOG
priority_boost = 4
name_format = HOSTNAME
#name = mydomain
The plug-in programs install their configuration files in a special
directory dedicated to audispd
plug-ins. It is /etc/audisp/plugins.d by default.
The plug-in configuration files have the following options:
active
Specifies if the program will use
audispd. Possible values are
yes or no.
direction
Specifies the way the plug-in was designed to communicate with audit.
It informs the event dispatcher in which directions the events flow.
Possible values are in or out.
path
Specifies the absolute path to the plug-in executable. In case of internal plug-ins, this option specifies the plug-in name.
type
Specifies the way the plug-in is to be run. Possible values are
builtin or always. Use
builtin for internal plug-ins
(af_unix and syslog) and
always for most (if not all) other plug-ins. Default
is always.
args
Specifies the argument that is passed to the plug-in program. Normally, plug-in programs read their arguments from their configuration file and do not need to receive any arguments. There is a limit of 2 arguments.
format
Specifies the format of data that the audit dispatcher passes to the
plug-in program. Valid options are binary or
string. binary passes the data
exactly as the event dispatcher receives them from the audit daemon.
string instructs the dispatcher to change the event
into a string that is parseable by the audit parsing library. Default
is string.
active = no
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string
This chapter shows how to set up a simple audit scenario. Every step involved in configuring and enabling audit is explained in detail. After you have learned to set up audit, consider a real-world example scenario in Chapter 33, Introducing an Audit Rule Set.
To set up audit on openSUSE Leap, you need to complete the following steps:
Make sure that all required packages are installed:
audit,
audit-libs, and optionally
audit-libs-python. To use the
log visualization as described in Section 32.6, “Configuring Log Visualization”,
install gnuplot and
graphviz from the
openSUSE Leap media.
Determine the components to audit. Refer to Section 32.1, “Determining the Components to Audit” for details.
Check or modify the basic audit daemon configuration. Refer to Section 32.2, “Configuring the Audit Daemon” for details.
Enable auditing for system calls. Refer to Section 32.3, “Enabling Audit for System Calls” for details.
Compose audit rules to suit your scenario. Refer to Section 32.4, “Setting Up Audit Rules” for details.
Generate logs and configure tailor-made reports. Refer to Section 32.5, “Configuring Audit Reports” for details.
Configure optional log visualization. Refer to Section 32.6, “Configuring Log Visualization” for details.
Before configuring any of the components of the audit system, make sure
that the audit daemon is not running by entering systemctl
status auditd as root. On a default
openSUSE Leap system, audit is started on boot, so you need to turn it
off by entering systemctl stop auditd. Start
the daemon after configuring it with systemctl start
auditd.
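The resulting stop/configure/start cycle can be summarized as follows:

```shell
# Check whether the audit daemon is running, stop it before
# changing its configuration, then start it again.
sudo systemctl status auditd
sudo systemctl stop auditd
# ... edit /etc/audit/auditd.conf and the audit rules ...
sudo systemctl start auditd
```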
Before starting to create your own audit configuration, determine to which degree you want to use it. Check the following general rules to determine which use case best applies to you and your requirements:
If you require a full security audit for CAPP/EAL certification, enable full audit for system calls and configure watches on various configuration files and directories, similar to the rule set featured in Chapter 33, Introducing an Audit Rule Set.
If you need to trace a process based on the audit rules, use
autrace.
If you require file and directory watches to track access to important or security-sensitive data, create a rule set matching these requirements. Enable audit as described in Section 32.3, “Enabling Audit for System Calls” and proceed to Section 32.4, “Setting Up Audit Rules”.
The basic setup of the audit daemon is done by editing
/etc/audit/auditd.conf. You may also use YaST
to configure the basic settings by calling › › . Use the
tabs and for
configuration.
log_file = /var/log/audit/audit.log
log_format = RAW
log_group = root
priority_boost = 4
flush = INCREMENTAL
freq = 20
num_logs = 5
disp_qos = lossy
dispatcher = /sbin/audispd
name_format = NONE
##name = mydomain
max_log_file = 6
max_log_file_action = ROTATE
space_left = 75
space_left_action = SYSLOG
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SUSPEND
disk_full_action = SUSPEND
disk_error_action = SUSPEND
##tcp_listen_port =
tcp_listen_queue = 5
tcp_max_per_addr = 1
##tcp_client_ports = 1024-65535
tcp_client_max_idle = 0
The default settings work reasonably well for many setups. Some values,
such as num_logs, max_log_file,
space_left, and admin_space_left
depend on the size of your deployment. If disk space is limited, you
might want to reduce the number of log files to keep if they are rotated,
and to get an earlier warning if disk space is running out.
For a CAPP-compliant setup, adjust the values for
log_file, flush,
max_log_file, max_log_file_action,
space_left, space_left_action,
admin_space_left,
admin_space_left_action,
disk_full_action, and
disk_error_action, as described in
Section 31.2, “Configuring the Audit Daemon”. An example CAPP-compliant
configuration looks like this:
log_file = PATH_TO_SEPARATE_PARTITION/audit.log
log_format = RAW
priority_boost = 4
flush = SYNC ### or DATA
freq = 20
num_logs = 4
dispatcher = /sbin/audispd
disp_qos = lossy
max_log_file = 5
max_log_file_action = KEEP_LOGS
space_left = 75
space_left_action = EMAIL
action_mail_acct = root
admin_space_left = 50
admin_space_left_action = SINGLE ### or HALT
disk_full_action = SUSPEND ### or HALT
disk_error_action = SUSPEND ### or HALT
The ### precedes comments where you can choose from
several options. Do not add the comments to your actual configuration
files.
Refer to Section 31.2, “Configuring the Audit Daemon” for detailed background
information about the auditd.conf configuration
parameters.
If the audit framework is not installed, install the
audit package. A standard openSUSE Leap
system does not have auditd running by default. Enable it with:
tux > sudo systemctl enable auditd
There are different levels of auditing activity available:
Out of the box (without any further configuration) auditd logs only
events concerning its own configuration changes to
/var/log/audit/audit.log. No events (file access,
system call, etc.) are generated by the kernel audit component until
requested by auditctl. However, other kernel
components and modules may log audit events outside of the control of
auditctl and these appear in the audit log. By
default, the only module that generates audit events is AppArmor.
To audit system calls and get meaningful file watches, you need to enable audit contexts for system calls.
As you need system call auditing capabilities even when you are
configuring plain file or directory watches, you need to enable audit
contexts for system calls. To enable audit contexts for the duration of
the current session only, execute auditctl -e 1 as
root. To disable this feature, execute auditctl -e
0 as root.
The audit contexts are enabled by default. To turn this feature off
temporarily, use auditctl -e 0.
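To verify the current state of the audit system, query its status; the enabled field of the output reflects the -e setting (the exact output fields vary between audit versions):

```shell
# Show audit status flags, including whether auditing is enabled.
sudo auditctl -s
# Temporarily disable, then re-enable, audit contexts.
sudo auditctl -e 0
sudo auditctl -e 1
```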
Using audit rules, determine which aspects of the system should be analyzed by audit. Normally this includes important databases and security-relevant configuration files. You may also analyze various system calls in detail if a broad analysis of your system is required. A very detailed example configuration that includes most of the rules that are needed in a CAPP compliant environment is available in Chapter 33, Introducing an Audit Rule Set.
Audit rules can be passed to the audit daemon on the
auditctl command line and by composing a rule
set in /etc/audit/audit.rules which is processed
whenever the audit daemon is started. To customize
/etc/audit/audit.rules either edit it directly, or
use YaST: › › . Rules passed on the command line are
not persistent and need to be re-entered when the audit daemon is
restarted.
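For example, the following watch (the key name is only an illustration) is active immediately but is lost when the audit daemon is restarted, because it is passed on the command line instead of being added to /etc/audit/audit.rules:

```shell
# Non-persistent rule: watch /etc/passwd for writes and attribute
# changes, tagged with a key for easier searching later.
sudo auditctl -w /etc/passwd -p wa -k CFG_passwd
# Verify that the rule is currently loaded.
sudo auditctl -l
```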
A simple rule set for very basic auditing on a few important files and directories could look like this:
# basic audit system parameters
-D
-b 8192
-f 1
-e 1
# some file and directory watches with keys
-w /var/log/audit/ -k LOG_audit
-w /etc/audit/auditd.conf -k CFG_audit_conf -p rxwa
-w /etc/audit/audit.rules -k CFG_audit_rules -p rxwa
-w /etc/passwd -k CFG_passwd -p rwxa
-w /etc/sysconfig/ -k CFG_sysconfig
# an example system call rule
-a entry,always -S umask
### add your own rules
When configuring the basic audit system parameters (such as the backlog
parameter -b) test these settings with your intended
audit rule set to determine whether the backlog size is appropriate for
the level of logging activity caused by your audit rule set. If your
chosen backlog size is too small, your system might not be able to handle
the audit load, and the kernel consults the failure flag (-f) when
the backlog limit is exceeded.
When choosing the failure flag, note that -f 2 tells
your system to perform an immediate shutdown without flushing any
pending data to disk when the limits of your audit system are exceeded.
Because this shutdown is not a clean shutdown, restrict the use of
-f 2 to only the most security-conscious environments
and use -f 1 (system continues to run, issues a warning
and audit stops) for any other setup to avoid loss of data or data
corruption.
Directory watches produce less verbose output than separate file watches
for the files under these directories. To get detailed logging for your
system configuration in /etc/sysconfig, for example,
add watches for each file. Audit does not support globbing,
which means you cannot create a rule that says -w
/etc/* and watches all files and directories below
/etc.
For better identification in the log file, a key has been added to each
of the file and directory watches. Using the key, it is easier to comb
the logs for events related to a certain rule. When creating keys,
distinguish between mere log file watches and configuration file watches
by using an appropriate prefix with the key, in this case
LOG for a log file watch and CFG
for a configuration file watch. Using the file name as part of the key
also makes it easier for you to identify events of this type in the log
file.
Another thing to keep in mind when creating file and directory watches is that audit cannot deal with files that do not exist when the rules are created. Any file that is added to your system while audit is already running is not watched unless you extend the rule set to watch this new file.
For more information about creating custom rules, refer to Section 31.4, “Passing Parameters to the Audit System”.
After you change audit rules, always restart the audit daemon with
systemctl restart auditd to reread the
changed rules.
To avoid having to dig through the raw audit logs to get an impression of what your system is currently doing, run custom audit reports at certain intervals. Custom audit reports enable you to focus on areas of interest and get meaningful statistics on the nature and frequency of the events you are monitoring. To analyze individual events in detail, use the ausearch tool.
Before setting up audit reporting, consider the following:
What types of events do you want to monitor by generating regular reports? Select the appropriate aureport command lines as described in Section 31.5.2, “Generating Custom Audit Reports”.
What do you want to do with the audit reports? Decide whether to create graphical charts from the data accumulated or whether it should be transferred into any sort of spreadsheet or database. Set up the aureport command line and further processing similar to the examples shown in Section 32.6, “Configuring Log Visualization” if you want to visualize your reports.
When and at which intervals should the reports run? Set up appropriate automated reporting using cron.
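For example, a crontab entry similar to the following (the output file path is only an illustration) could append a daily summary of failed file events:

```shell
# Run at midnight every day and append the report to a file.
0 0 * * * /usr/sbin/aureport -f -i --failed --summary >> /var/log/audit-daily.txt
```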
For this example, assume that you are interested in finding out about any attempts to access your audit, PAM, and system configuration. Proceed as follows to find out about file events on your system:
Generate a full summary report of all events and check for any anomalies in the summary report, for example, have a look at the “failed syscalls” record, because these might have failed because of insufficient permissions to access a file or a file not being there:
tux > sudo aureport

Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 16:30:10.352
Selected time for report: 03/02/09 14:13:38 - 17/02/09 16:30:10.352
Number of changes in configuration: 24
Number of changes to accounts, groups, or roles: 0
Number of logins: 9
Number of failed logins: 15
Number of authentications: 19
Number of failed authentications: 578
Number of users: 3
Number of terminals: 15
Number of host names: 4
Number of executables: 20
Number of files: 279
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 1238
Number of events: 5435
Run a summary report for failed events and check the “files” record for the number of failed file access events:
tux > sudo aureport --failed

Failed Summary Report
======================
Range of time in logs: 03/02/09 14:13:38.225 - 17/02/09 16:30:10.352
Selected time for report: 03/02/09 14:13:38 - 17/02/09 16:30:10.352
Number of changes in configuration: 0
Number of changes to accounts, groups, or roles: 0
Number of logins: 0
Number of failed logins: 15
Number of authentications: 0
Number of failed authentications: 578
Number of users: 1
Number of terminals: 7
Number of host names: 4
Number of executables: 12
Number of files: 77
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 994
Number of anomaly events: 0
Number of responses to anomaly events: 0
Number of crypto events: 0
Number of keys: 2
Number of process IDs: 713
Number of events: 1589
To list the files that could not be accessed, run a summary report of failed file events:
tux > sudo aureport -f -i --failed --summary

Failed File Summary Report
===========================
total file
===========================
80 /var
80 spool
80 cron
80 lastrun
46 /usr/lib/locale/en_GB.UTF-8/LC_CTYPE
45 /usr/lib/locale/locale-archive
38 /usr/lib/locale/en_GB.UTF-8/LC_IDENTIFICATION
38 /usr/lib/locale/en_GB.UTF-8/LC_MEASUREMENT
38 /usr/lib/locale/en_GB.UTF-8/LC_TELEPHONE
38 /usr/lib/locale/en_GB.UTF-8/LC_ADDRESS
38 /usr/lib/locale/en_GB.UTF-8/LC_NAME
38 /usr/lib/locale/en_GB.UTF-8/LC_PAPER
38 /usr/lib/locale/en_GB.UTF-8/LC_MESSAGES
38 /usr/lib/locale/en_GB.UTF-8/LC_MONETARY
38 /usr/lib/locale/en_GB.UTF-8/LC_COLLATE
38 /usr/lib/locale/en_GB.UTF-8/LC_TIME
38 /usr/lib/locale/en_GB.UTF-8/LC_NUMERIC
8 /etc/magic.mgc
...
To focus this summary report on a few files or directories of interest
only, such as /etc/audit/auditd.conf,
/etc/pam.d, and
/etc/sysconfig, use a command similar to the
following:
tux > sudo aureport -f -i --failed --summary | grep -e "/etc/audit/auditd.conf" -e "/etc/pam.d/" -e "/etc/sysconfig"
1 /etc/sysconfig/displaymanager
From the summary report, then proceed to isolate these items of interest from the log and find out their event IDs for further analysis:
tux > sudo aureport -f -i --failed | grep -e "/etc/audit/auditd.conf" -e "/etc/pam.d/" -e "/etc/sysconfig"
993. 17/02/09 16:47:34 /etc/sysconfig/displaymanager readlink no /bin/vim-normal root 7887
994. 17/02/09 16:48:23 /etc/sysconfig/displaymanager getxattr no /bin/vim-normal root 7889
Use the event ID to get a detailed record for each item of interest:
tux > sudo ausearch -a 7889 -i
----
time->Tue Feb 17 16:48:23 2009
type=PATH msg=audit(1234885703.090:7889): item=0 name="/etc/sysconfig/displaymanager" inode=369282 dev=08:06 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1234885703.090:7889): cwd="/root"
type=SYSCALL msg=audit(1234885703.090:7889): arch=c000003e syscall=191 success=no exit=-61 a0=7e1e20 a1=7f90e4cf9187 a2=7fffed5b57d0 a3=84 items=1 ppid=25548 pid=23045 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=1166 comm="vim" exe="/bin/vim-normal" key=(null)
If you are interested in events during a particular period of time, trim
down the reports by using start and end dates and times with your
aureport commands (-ts and
-te). For more information, refer to
Section 31.5.2, “Generating Custom Audit Reports”.
All steps (except for the last one) can be run automatically and can
easily be scripted and configured as cron jobs. Any of the
--failed --summary reports could be transformed easily
into a bar chart that plots files versus failed access attempts. For more
information about visualizing audit report data, refer to
Section 32.6, “Configuring Log Visualization”.
Using the scripts mkbar and mkgraph
you can illustrate your audit statistics with various graphs and charts.
As with any other aureport command, the plotting
commands are scriptable and can easily be configured to run as cron jobs.
mkbar and mkgraph were created by
Steve Grubb at Red Hat. They are available from
http://people.redhat.com/sgrubb/audit/visualize/.
Because the current version of audit in openSUSE Leap does not ship
with these scripts, proceed as follows to make them available on your
system:
Use mkbar and mkgraph at your own
risk. Any content downloaded from the Web is potentially dangerous
to your system, even more so when run with root privileges.
Download the scripts to root's ~/bin
directory:
tux > sudo wget http://people.redhat.com/sgrubb/audit/visualize/mkbar -O ~/bin/mkbar
tux > sudo wget http://people.redhat.com/sgrubb/audit/visualize/mkgraph -O ~/bin/mkgraph
Adjust the file permissions to read, write, and execute for
root:
tux > sudo chmod 744 ~/bin/mk{bar,graph}
To plot summary reports, such as the ones discussed in
Section 32.5, “Configuring Audit Reports”, use the script
mkbar. Some example commands could look like the
following:
tux > sudo aureport -e -i --summary | mkbar events
tux > sudo aureport -f -i --summary | mkbar files
tux > sudo aureport -l -i --summary | mkbar login
tux > sudo aureport -u -i --summary | mkbar users
tux > sudo aureport -s -i --summary | mkbar syscalls
To create a summary chart of failed events of any of the above event
types, add the --failed option to the respective
aureport command. To cover a certain period of time
only, use the -ts and -te options on
aureport. Any of these commands can be tweaked further by narrowing down
its scope using grep or egrep and regular expressions. See the comments
in the mkbar script for an example. Any of the above
commands produces a PNG file containing a bar chart of the requested
data.
To illustrate the relationship between different kinds of audit objects,
such as users and system calls, use the script
mkgraph. Some example commands could look like the
following:
tux > sudo LC_ALL=C aureport -u -i | awk '/^[0-9]/ { print $4" "$7 }' | sort | uniq | mkgraph users_vs_exec
tux > sudo LC_ALL=C aureport -f -i | awk '/^[0-9]/ { print $8" "$4 }' | sort | uniq | mkgraph users_vs_files
tux > sudo LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $4" "$6 }' | sort | uniq | mkgraph syscall_vs_com
tux > sudo LC_ALL=C aureport -s -i | awk '/^[0-9]/ { print $5" "$4 }' | sort | uniq | mkgraph syscall_vs_file
Graphs can also be combined to illustrate complex relationships. See the
comments in the mkgraph script for further information
and an example. The graphs produced by this script are created in
PostScript format by default, but you can change the output format by
changing the EXT variable in the script from
ps to png or
jpg.
The following example configuration illustrates how audit can be used to monitor your system. It highlights the most important items that need to be audited to cover the list of auditable events specified by Controlled Access Protection Profile (CAPP).
The example rule set is divided into the following sections:
Basic audit configuration (see Section 33.1, “Adding Basic Audit Configuration Parameters”)
Watches on audit log files and configuration files (see Section 33.2, “Adding Watches on Audit Log Files and Configuration Files”)
Monitoring operations on file system objects (see Section 33.3, “Monitoring File System Objects”)
Monitoring security databases (see Section 33.4, “Monitoring Security Configuration Files and Databases”)
Monitoring miscellaneous system calls (Section 33.5, “Monitoring Miscellaneous System Calls”)
Filtering system call arguments (see Section 33.6, “Filtering System Call Arguments”)
To transform this example into a configuration file to use in your live setup, proceed as follows:
Choose the appropriate settings for your setup and adjust them.
Adjust the file /etc/audit/audit.rules by adding
rules from the examples below or by modifying existing rules.
Do not copy the example below into your audit setup without adjusting it to your needs. Determine what and to what extent to audit.
The entire audit.rules is a collection of
auditctl commands. Every line in this file expands to a
full auditctl command line. The syntax used in the rule
set is the same as that of the auditctl command.
-D
-b 8192
-f 2

-D: Delete any preexisting rules before starting to define new ones.
-b 8192: Set the number of buffers to take the audit messages. Depending on the level of audit logging on your system, increase or decrease this figure.
-f 2: Set the failure flag to use when the kernel needs to handle critical
errors. Possible values are 0 (silent), 1 (printk), and 2 (panic).
By emptying the rule queue with the -D option, you make
sure that audit does not use any other rule set than what you are
offering it by means of this file. Choosing an appropriate buffer number
(-b) is vital to avoid having your system fail because
of too high an audit load. Choosing the panic failure flag -f
2 ensures that your audit records are complete even if the
system is encountering critical errors. By shutting down the system on a
critical error, audit makes sure that no process escapes from its control
as it otherwise might if level 1 (printk) were chosen.
Before using your audit rule set on a live system, make sure that the
setup has been thoroughly evaluated on test systems using the
worst case production workload. It is even more
critical that you do this when specifying the -f 2
flag, because this instructs the kernel to panic (perform an immediate
halt without flushing pending data to disk) if any thresholds are
exceeded. Consider the use of the -f 2 flag for only
the most security-conscious environments.
Adding watches on your audit configuration files and the log files themselves ensures that you can track any attempt to tamper with the configuration files or detect any attempted accesses to the log files.
Creating watches on a directory is not necessarily sufficient if you need events for file access. Events on directory access are only triggered when the directory's inode is updated with metadata changes. To trigger events on file access, add watches for each file to monitor.
-w /var/log/audit/ 1 -w /var/log/audit/audit.log -w /var/log/audit/audit_log.1 -w /var/log/audit/audit_log.2 -w /var/log/audit/audit_log.3 -w /var/log/audit/audit_log.4 -w /etc/audit/auditd.conf -p wa2 -w /etc/audit/audit.rules -p wa -w /etc/libaudit.conf -p wa
Set a watch on the directory where the audit log is located. Trigger an event for any type of access attempt to this directory. If you are using log rotation, add watches for the rotated logs as well.
Set a watch on an audit configuration file. Log all write and attribute change attempts to this file.
Auditing system calls helps track your system's activity well beyond the application level. By tracking file system–related system calls, you get an idea of how your applications use these system calls and can determine whether that use is appropriate. By tracking mount and unmount operations, you can track the use of external resources (removable media, remote file systems, etc.).
Auditing system calls results in a high logging activity. This activity, in turn, puts a heavy load on the kernel. With a kernel less responsive than usual, the system's backlog and rate limits might be exceeded. Carefully evaluate which system calls to include in your audit rule set and adjust the log settings accordingly. See Section 31.2, “Configuring the Audit Daemon” for details on how to tweak the relevant settings.
-a entry,always -S chmod -S fchmod -S chown -S chown32 -S fchown -S fchown32 -S lchown -S lchown32    1
-a entry,always -S creat -S open -S truncate -S truncate64 -S ftruncate -S ftruncate64    2
-a entry,always -S mkdir -S rmdir    3
-a entry,always -S unlink -S rename -S link -S symlink    4
-a entry,always -S setxattr    5
-a entry,always -S lsetxattr
-a entry,always -S fsetxattr
-a entry,always -S removexattr
-a entry,always -S lremovexattr
-a entry,always -S fremovexattr
-a entry,always -S mknod    6
-a entry,always -S mount -S umount -S umount2    7
Enable an audit context for system calls related to changing file ownership and permissions. Depending on the hardware architecture of your system, enable or disable the *32 rules. 64-bit systems, like AMD64/Intel 64, require the *32 rules to be removed.
Enable an audit context for system calls related to file content modification. Depending on the hardware architecture of your system, enable or disable the *64 rules. 64-bit systems, like AMD64/Intel 64, require the *64 rules to be removed.
Enable an audit context for any directory operation, like creating or removing a directory.
Enable an audit context for any linking operation, such as creating a symbolic link, creating a link, unlinking, or renaming.
Enable an audit context for any operation related to extended file system attributes.
Enable an audit context for the mknod system call, which creates special (device) files.
Enable an audit context for any mount or umount operation. Depending on the hardware architecture, disable the rule for whichever umount variant (umount or umount2) your system does not provide.
To make sure that your system is not made to do undesired things, track
any attempts to change the cron and
at configurations or the lists of scheduled
jobs. Tracking any write access to the user, group, password and login
databases and logs helps you identify any attempts to manipulate your
system's user database.
Tracking changes to your system configuration (kernel, services, time, etc.) helps you spot any attempts by others to manipulate essential functionality of your system. Changes to the PAM configuration should also be monitored in a secure environment, because the authentication stack should not be changed by anyone other than the administrator, and it should be logged which applications use PAM and how. The same applies to any other configuration files related to secure authentication and communication.
1
-w /var/spool/atspool
-w /etc/at.allow
-w /etc/at.deny
-w /etc/cron.allow -p wa
-w /etc/cron.deny -p wa
-w /etc/cron.d/ -p wa
-w /etc/cron.daily/ -p wa
-w /etc/cron.hourly/ -p wa
-w /etc/cron.monthly/ -p wa
-w /etc/cron.weekly/ -p wa
-w /etc/crontab -p wa
-w /var/spool/cron/root
2
-w /etc/group -p wa
-w /etc/passwd -p wa
-w /etc/shadow
-w /etc/login.defs -p wa
-w /etc/securetty
-w /var/log/lastlog
3
-w /etc/hosts -p wa
-w /etc/sysconfig/
-w /etc/init.d/
-w /etc/ld.so.conf -p wa
-w /etc/localtime -p wa
-w /etc/sysctl.conf -p wa
-w /etc/modprobe.d/
-w /etc/modprobe.conf.local -p wa
-w /etc/modprobe.conf -p wa
4
-w /etc/pam.d/
5
-w /etc/aliases -p wa
-w /etc/postfix/ -p wa
6
-w /etc/ssh/sshd_config
-w /etc/stunnel/stunnel.conf
-w /etc/stunnel/stunnel.pem
-w /etc/vsftpd.ftpusers
-w /etc/vsftpd.conf
7
-a exit,always -S sethostname
-w /etc/issue -p wa
-w /etc/issue.net -p wa
Set watches on the at and cron configuration files and job directories, and log any write and attribute change attempts to them.
Set watches on the user, group, password, and login databases and logs and set labels to better identify any login-related events, such as failed login attempts.
Set a watch and a label on the static host name configuration in /etc/hosts, and track changes to the general system configuration (services, startup scripts, linker, time, kernel parameters, and kernel modules).
Set watches on the PAM configuration directory. If you are interested in particular files below the directory level, add explicit watches to these files as well.
Set watches on the postfix configuration to log any write attempt or attribute change, and use labels for better tracking in the logs.
Set watches and labels on the SSH, stunnel, and vsftpd configuration files.
Perform an audit of the sethostname system call, and set watches and labels on the system identification configuration in /etc/issue and /etc/issue.net.
Apart from auditing file system related system calls, as described in
Section 33.3, “Monitoring File System Objects”, you can also track various other
system calls. Tracking task creation helps you understand your
applications' behavior. Auditing the umask
system call lets you track how processes modify their file creation mask. Tracking
any attempts to change the system time helps you identify anyone or any
process trying to manipulate the system time.
1
-a entry,always -S clone -S fork -S vfork
2
-a entry,always -S umask
3
-a entry,always -S adjtimex -S settimeofday
In addition to the system call auditing introduced in Section 33.3, “Monitoring File System Objects” and Section 33.5, “Monitoring Miscellaneous System Calls”, you can track application behavior to an even higher degree. Applying filters helps you focus the audit on the areas of primary interest to you. This section introduces filtering system call arguments for non-multiplexed system calls like access and for multiplexed ones like socketcall or ipc. Whether system calls are multiplexed depends on the hardware architecture used. Neither socketcall nor ipc is multiplexed on 64-bit architectures, such as AMD64/Intel 64.
Auditing system calls results in high logging activity, which in turn puts a heavy load on the kernel. With a kernel less responsive than usual, the system's backlog and rate limits might well be exceeded. Carefully evaluate which system calls to include in your audit rule set and adjust the log settings accordingly. See Section 31.2, “Configuring the Audit Daemon” for details on how to tweak the relevant settings.
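Both limits mentioned above live in the kernel and can be tuned from the rules file as well; a sketch with illustrative numbers (you can inspect the current counters, including lost events, with auditctl -s):

```
# Raise the audit buffer count for bursty system call auditing
-b 16384
# Limit the kernel to at most 120 audit messages per second;
# exceeding this rate triggers the configured failure action
-r 120
```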
The access system call checks whether a process would be allowed to read,
write or test for the existence of a file or file system object. Using
the -F filter flag, build rules matching specific access
calls in the format -F
a1=ACCESS_MODE. Check
/usr/include/fcntl.h for a list of possible
arguments to the access system call.
-a entry,always -S access -F a1=4    1
-a entry,always -S access -F a1=6    2
-a entry,always -S access -F a1=7    3
Audit the access system call, but only if the second argument of the system call (a1) is 4 (R_OK). This rule matches read-only access checks.
Audit the access system call, but only if the second argument of the system call (a1) is 6, the logical sum of 4 (R_OK) and 2 (W_OK). This rule matches access checks for read and write permission.
Audit the access system call, but only if the second argument of the system call (a1) is 7, the logical sum of 4 (R_OK), 2 (W_OK), and 1 (X_OK). This rule matches access checks for read, write, and execute permission.
The socketcall system call is a multiplexed system call. Multiplexed
means that there is only one system call for all possible calls and that
libc passes the actual system call to use as the first argument
(a0). Check the manual page of socketcall for possible
system calls and refer to
/usr/src/linux/include/linux/net.h for a list of
possible argument values and system call names. Audit supports filtering
for specific system calls with a -F
a0=SYSCALL_NUMBER filter.
-a entry,always -S socketcall -F a0=1 -F a1=10    1
## Use this line on x86_64, ia64 instead
#-a entry,always -S socket -F a0=10
-a entry,always -S socketcall -F a0=5    2
## Use this line on x86_64, ia64 instead
#-a entry,always -S accept
Audit the socket(PF_INET6) system call. The -F a0=1 filter selects the socket call of the socketcall multiplexer, and -F a1=10 matches PF_INET6 as the first argument of socket. On 64-bit architectures, where socketcall is not multiplexed, filter for the first argument of socket directly.
Audit the socketcall system call. The filter flag is set to filter for a0=5, the accept call, so only incoming connection accepts are logged. On 64-bit architectures, audit the accept system call directly.
The ipc system call is another example of multiplexed system calls. The
actual call to invoke is determined by the first argument passed to the
ipc system call. Filtering for these arguments helps you focus on those
IPC calls of interest to you. Check
/usr/include/linux/ipc.h for possible argument
values.
1
## msgctl
-a entry,always -S ipc -F a0=14
## msgget
-a entry,always -S ipc -F a0=13
## Use these lines on x86_64, ia64 instead
#-a entry,always -S msgctl
#-a entry,always -S msgget
2
## semctl
-a entry,always -S ipc -F a0=3
## semget
-a entry,always -S ipc -F a0=2
## semop
-a entry,always -S ipc -F a0=1
## semtimedop
-a entry,always -S ipc -F a0=4
## Use these lines on x86_64, ia64 instead
#-a entry,always -S semctl
#-a entry,always -S semget
#-a entry,always -S semop
#-a entry,always -S semtimedop
3
## shmctl
-a entry,always -S ipc -F a0=24
## shmget
-a entry,always -S ipc -F a0=23
## Use these lines on x86_64, ia64 instead
#-a entry,always -S shmctl
#-a entry,always -S shmget
Audit system calls related to IPC SYSV message queues. In this case, the a0 values 14 (msgctl) and 13 (msgget) select the message queue operations of the ipc multiplexer.
Audit system calls related to IPC SYSV semaphores. In this case, the a0 values 3 (semctl), 2 (semget), 1 (semop), and 4 (semtimedop) select the semaphore operations.
Audit system calls related to IPC SYSV shared memory. In this case, the a0 values 24 (shmctl) and 23 (shmget) select the shared memory operations.
After configuring a few rules generating events and populating the logs,
you need to find a way to tell one event from the other. Using the
ausearch command, you can filter the logs for various
criteria. Using ausearch -m
MESSAGE_TYPE, you can at least filter
for events of a certain type. However, to be able to filter for events
related to a particular rule, you need to add a key to this rule in the
/etc/audit/audit.rules file. This key is then added
to the event record every time the rule logs an event. To retrieve these
log entries, simply run ausearch -k
YOUR_KEY to get a list of records
related to the rule carrying this particular key.
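Because the key string is stored verbatim in each record, even plain text tools can select keyed records when ausearch is unavailable. A minimal sketch (the two sample records and the /tmp path are fabricated for illustration; real records live in /var/log/audit/audit.log and contain many more fields):

```shell
# Create a small, abbreviated sample log; the key="..." field is what -k adds.
cat > /tmp/audit.sample <<'EOF'
type=SYSCALL msg=audit(1235030994.032:8649): syscall=82 comm="vim" key="CFG_audit.rules"
type=SYSCALL msg=audit(1235030994.100:8650): syscall=2 comm="cat" key="LOG_audit"
EOF

# Rough equivalent of `ausearch -k CFG_audit.rules` on a raw log file:
grep 'key="CFG_audit.rules"' /tmp/audit.sample
```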
As an example, assume you have added the following rule to your rule file:
-w /etc/audit/audit.rules -p wa
Without a key assigned to it, you would probably need to filter for
SYSCALL or PATH events and then use
grep or similar tools to isolate any events related to the above rule.
Now, add a key to the above rule, using the -k option:
-w /etc/audit/audit.rules -p wa -k CFG_audit.rules
You can specify any text string as the key. Distinguish watches related to
different types of files (configuration files or log files) from one
another using different key prefixes (CFG,
LOG, etc.) followed by the file name. Finding any
records related to the above rule now comes down to the following:
ausearch -k CFG_audit.rules
----
time->Thu Feb 19 09:09:54 2009
type=PATH msg=audit(1235030994.032:8649): item=3 name="audit.rules~" inode=370603 dev=08:06 mode=0100640 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=2 name="audit.rules" inode=370603 dev=08:06 mode=0100640 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=1 name="/etc/audit" inode=368599 dev=08:06 mode=040750 ouid=0 ogid=0 rdev=00:00
type=PATH msg=audit(1235030994.032:8649): item=0 name="/etc/audit" inode=368599 dev=08:06 mode=040750 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1235030994.032:8649): cwd="/etc/audit"
type=SYSCALL msg=audit(1235030994.032:8649): arch=c000003e syscall=82 success=yes exit=0 a0=7deeb0 a1=883b30 a2=2 a3=ffffffffffffffff items=4 ppid=25400 pid=32619 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=1164 comm="vim" exe="/bin/vim-normal" key="CFG_audit.rules"
There are other resources available containing valuable information about the Linux audit framework:
There are several man pages installed along with the audit tools that provide valuable and very detailed information:
auditd(8)
The Linux audit daemon
auditd.conf(5)
The Linux audit daemon configuration file
auditctl(8)
A utility to assist controlling the kernel's audit system
autrace(8)
A program similar to strace
ausearch(8)
A tool to query audit daemon logs
aureport(8)
A tool that produces summary reports of audit daemon logs
audispd.conf(5)
The audit event dispatcher configuration file
audispd(8)
The audit event dispatcher daemon talking to plug-in programs
The home page of the Linux audit project. This site contains several specifications relating to different aspects of Linux audit, and a short FAQ.
/usr/share/doc/packages/audit
The audit package itself contains a README with basic design
information and sample .rules files for different
scenarios:
capp.rules: Controlled Access Protection Profile (CAPP)
lspp.rules: Labeled Security Protection Profile (LSPP)
nispom.rules: National Industrial Security Program Operating Manual Chapter 8 (NISPOM)
stig.rules: Security Technical Implementation Guide (STIG)
The official Web site of the Common Criteria project. Learn all about the Common Criteria security certification initiative and which role audit plays in this framework.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
openSUSE Leap is used for a broad range of usage scenarios in enterprise and scientific data centers. SUSE has ensured openSUSE Leap is set up in a way that accommodates different operational purposes with optimal performance. However, openSUSE Leap must meet very different demands when employed on a number-crunching server compared to a file server, for example.
It is not possible to ship a distribution that is optimized for all workloads. Different workloads vary substantially in some aspects. Most important among those are I/O access patterns, memory access patterns, and process scheduling. A behavior that perfectly suits a certain workload might reduce performance of another workload. For example, I/O-intensive tasks, such as handling database requests, usually have completely different requirements than CPU-intensive tasks, such as video encoding. The versatility of Linux makes it possible to configure your system in a way that it brings out the best in each usage scenario.
This manual introduces you to means to monitor and analyze your system. It describes methods to manage system resources and to tune your system. This guide does not offer recipes for special scenarios, because each server has its own demands. Instead, it enables you to thoroughly analyze your servers and make the most of them.
Tuning a system requires a carefully planned approach. Learn which steps are necessary to successfully improve your system.
Linux offers a large variety of tools to monitor almost every aspect of the system. Learn how to use these utilities and how to read and analyze the system log files.
The Linux kernel itself offers means to examine every nut, bolt and screw of the system. This part introduces you to SystemTap, a scripting language for writing kernel modules that can be used to analyze and filter data. Collect debugging information and find bottlenecks by using kernel probes and Perf. Finally, monitor applications with OProfile.
Learn how to set up a tailor-made system that exactly fits the server's needs. Get to know how to use power management while at the same time keeping the performance of a system at a level that matches the current requirements.
The Linux kernel can be optimized either by using sysctl, via the
/proc and /sys file systems
or by kernel command line parameters. This part covers tuning the I/O
performance and optimizing the way Linux schedules processes. It
also describes basic principles of memory management and shows how
memory management can be fine-tuned to suit the needs of specific
applications and usage patterns. Furthermore, it describes how to
optimize network performance.
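The relationship between sysctl keys and the /proc/sys tree mentioned above is mechanical: dots in the key become path separators. The following is a small sketch of that mapping; the helper names are made up for this illustration, and the parameter names shown are common Linux sysctls used purely as examples.

```python
# Sketch: translate between sysctl key notation and /proc/sys paths.
# Hypothetical helpers for illustration; availability of a given
# parameter depends on the running kernel.

def sysctl_to_path(key):
    """Map a sysctl key like 'vm.swappiness' to its /proc/sys file."""
    return "/proc/sys/" + key.replace(".", "/")

def path_to_sysctl(path):
    """Inverse mapping: a /proc/sys file back to the sysctl key."""
    prefix = "/proc/sys/"
    if not path.startswith(prefix):
        raise ValueError("not a /proc/sys path: " + path)
    return path[len(prefix):].replace("/", ".")

print(sysctl_to_path("vm.swappiness"))                  # /proc/sys/vm/swappiness
print(path_to_sysctl("/proc/sys/net/ipv4/ip_forward"))  # net.ipv4.ip_forward
```

Reading or writing these files with echo, or setting the same key with sysctl -w, changes the same kernel parameter.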
This part enables you to analyze and handle application or system crashes. It introduces tracing tools such as strace or ltrace and describes how to handle system crashes using Kexec and Kdump.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation
is usually available in your installed system under
/usr/share/doc/manual.
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product-inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/, log in, and create a new bug report.
For feedback on the documentation of this product, you can also send a
mail to doc-team@suse.com. Make sure to include the
document title, the product version and the publication date of the
documentation. To report errors or suggest enhancements, provide a concise
description of the problem and refer to the respective section number and
page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and
parameters
user: users or groups
package name: name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
File › Save As: menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also
prefix these commands with the sudo command to run them
as non-privileged user.
root # command
tux > sudo command
Commands that can be run by non-privileged users.
tux >command
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
This manual discusses how to find the reasons for performance problems and provides means to solve these problems. Before you start tuning your system, you should make sure you have ruled out common problems and have found the cause for the problem. You should also have a detailed plan on how to tune the system, because applying random tuning tips often will not help and could make things worse.
Specify the problem that needs to be solved.
In case the degradation is new, identify any recent changes to the system.
Identify why the issue is considered a performance problem.
Specify a metric that can be used to analyze performance. This metric could for example be latency, throughput, the maximum number of users that are simultaneously logged in, or the maximum number of active users.
Measure current performance using the metric from the previous step.
Identify the subsystem(s) where the application is spending the most time.
Monitor the system and/or the application.
Analyze the data, categorize where time is being spent.
Tune the subsystem identified in the previous step.
Remeasure the current performance without monitoring using the same metric as before.
If performance is still not acceptable, start over with Step 3.
Before starting to tune a system, try to describe the problem as exactly as possible. A statement like “The system is slow!” is not a helpful problem description. For example, it could make a difference whether the system speed needs to be improved in general or only at peak times.
Furthermore, make sure you can apply a measurement to your problem, otherwise you cannot verify if the tuning was a success or not. You should always be able to compare “before” and “after”. Which metrics to use depends on the scenario or application you are looking into. Relevant Web server metrics, for example, could be expressed in terms of:
The time to deliver a page
Number of pages served per second or megabytes transferred per second
The maximum number of users that can be downloading pages while still receiving pages within an acceptable latency
A performance problem is often caused by network or hardware problems, bugs, or configuration issues. Make sure to rule out problems such as the ones listed below before attempting to tune your system:
Check the output of the systemd journal (see
Chapter 11, journalctl: Query the systemd Journal) for unusual entries.
Check (using top or ps) whether a
certain process misbehaves by eating up unusual amounts of CPU time or
memory.
Check for network problems by inspecting
/proc/net/dev.
In case of I/O problems with physical disks, make sure it is not caused
by hardware problems (check the disk with the
smartmontools) or by a full disk.
Ensure that background jobs are scheduled to run at times when
the server load is low. Those jobs should also run with low priority
(set via nice).
If the machine runs several services using the same resources, consider moving services to another server.
Last, make sure your software is up-to-date.
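The /proc/net/dev check from the list above can be scripted. The following is a minimal sketch that summarizes per-interface byte counters; it parses a sample snippet in the kernel's usual layout rather than the live file, and interface_bytes is a hypothetical helper name.

```python
# Sketch: extract per-interface receive/transmit byte counters from
# /proc/net/dev-style text. On a live system, read /proc/net/dev itself.

SAMPLE = """\
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:  104013     520    0    0    0     0          0         0   104013     520    0    0    0     0       0          0
  eth0: 6045992   12033    0    0    0     0          0         0   482113     960    0    0    0     0       0          0
"""

def interface_bytes(text):
    """Return {interface: (rx_bytes, tx_bytes)}; hypothetical helper."""
    stats = {}
    for line in text.splitlines()[2:]:      # skip the two header lines
        name, counters = line.split(":", 1)
        fields = counters.split()
        # Field 0 is received bytes, field 8 is transmitted bytes.
        stats[name.strip()] = (int(fields[0]), int(fields[8]))
    return stats

print(interface_bytes(SAMPLE)["eth0"])  # (6045992, 482113)
```

Sampling these counters twice and dividing the difference by the interval yields a rough per-interface throughput.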
Finding the bottleneck is very often the hardest part when tuning a system. openSUSE Leap offers many tools to help you with this task. See Part II, “System Monitoring” for detailed information on general system monitoring applications and log file analysis. If the problem requires a long-term in-depth analysis, the Linux kernel offers means to perform such analysis. See Part III, “Kernel Monitoring” for coverage.
Once you have collected the data, it needs to be analyzed. First, inspect if the server's hardware (memory, CPU, bus) and its I/O capacities (disk, network) are sufficient. If these basic conditions are met, the system might benefit from tuning.
Make sure to carefully plan the tuning itself. It is of vital importance to only do one step at a time. Only by doing so can you measure whether a change provided an improvement or even had a negative impact. Each tuning activity should be measured over a sufficient time period to ensure you can do an analysis based on significant data. If you cannot measure a positive effect, do not make the change permanent. Chances are that it might have a negative effect in the future.
There are a number of programs, tools, and utilities which you can use to examine the status of your system. This chapter introduces some of them and describes their most important and frequently used parameters.
System log file analysis is one of the most important tasks when analyzing the system. In fact, looking at the system log files should be the first thing to do when maintaining or troubleshooting a system. openSUSE Leap automatically logs almost everything that happens on the system in detail. Since…
For each of the described commands, examples of the relevant outputs are
presented. In the examples, the first line is the command itself (after
the tux > or root #). Omissions are indicated with
square brackets ([...]) and long lines are wrapped
where necessary. Line breaks for long lines are indicated by a backslash
(\).
tux > command -x -y
output line 1
output line 2
output line 3 is annoyingly long, so long that \
we need to break it
output line 4
[...]
output line 98
output line 99
The descriptions have been kept short so that we can include as many
utilities as possible. Further information for all the commands can be
found in the manual pages. Most of the commands also understand the
parameter --help, which produces a brief list of possible
parameters.
While most Linux system monitoring tools monitor only a single aspect of the system, there are a few tools with a broader scope. To get an overview and find out which part of the system to examine further, use these tools first.
vmstat #
vmstat collects information about processes, memory, I/O, interrupts and CPU. If called without a sampling rate, it displays average values since the last reboot. When called with a sampling rate, it displays actual samples:
vmstat Output on a Lightly Used Machine #
tux > vmstat 2
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 44264 81520 424 935736 0 0 12 25 27 34 1 0 98 0 0
0 0 44264 81552 424 935736 0 0 0 0 38 25 0 0 100 0 0
0 0 44264 81520 424 935732 0 0 0 0 23 15 0 0 100 0 0
0 0 44264 81520 424 935732 0 0 0 0 36 24 0 0 100 0 0
0 0 44264 81552 424 935732 0 0 0 0 51 38 0 0 100 0 0
vmstat Output on a Heavily Used Machine (CPU bound) #
tux > vmstat 2
procs -----------memory----------- ---swap-- -----io---- -system-- -----cpu------
r b swpd free buff cache si so bi bo in cs us sy id wa st
32 1 26236 459640 110240 6312648 0 0 9944 2 4552 6597 95 5 0 0 0
23 1 26236 396728 110336 6136224 0 0 9588 0 4468 6273 94 6 0 0 0
35 0 26236 554920 110508 6166508 0 0 7684 27992 4474 4700 95 5 0 0 0
28 0 26236 518184 110516 6039996 0 0 10830 4 4446 4670 94 6 0 0 0
21 5 26236 716468 110684 6074872 0 0 8734 20534 4512 4061 96 4 0 0 0
The first line of the vmstat output always displays average values since the last reboot.
The columns show the following:
r: Shows the number of processes in a runnable state. These processes are either executing or waiting for a free CPU slot. If the number of processes in this column is constantly higher than the number of CPUs available, this may be an indication of insufficient CPU power.
b: Shows the number of processes waiting for a resource other than a CPU. A high number in this column may indicate an I/O problem (network or disk).
swpd: The amount of swap space (KB) currently used.
free: The amount of unused memory (KB).
inact: Recently unused memory that can be reclaimed. This column is only visible when calling vmstat with the parameter -a (recommended).
active: Recently used memory that normally does not get reclaimed. This column is only visible when calling vmstat with the parameter -a (recommended).
buff: File buffer cache (KB) in RAM that contains file system metadata. This column is not visible when calling vmstat with the parameter -a.
cache: Page cache (KB) in RAM with the actual contents of files. This column is not visible when calling vmstat with the parameter -a.
si, so: Amount of data (KB) that is moved from swap to RAM (si) or from RAM to swap (so) per second. High so values over a long period of time may indicate that an application is leaking memory and the leaked memory is being swapped out. High si values over a long period of time could mean that an application that was inactive for a very long time is now active again. Combined high si and so values for prolonged periods of time are evidence of swap thrashing and may indicate that more RAM needs to be installed in the system because there is not enough memory to hold the working set size.
bi: Number of blocks per second received from a block device (for example, a disk read). Note that swapping also impacts the values shown here. The block size may vary between file systems but can be determined using the stat utility. If throughput data is required, iostat may be used.
bo: Number of blocks per second sent to a block device (for example, a disk write). Note that swapping also impacts the values shown here.
in: Interrupts per second. A high value may indicate a high I/O level (network and/or disk), but could also be triggered for other reasons such as inter-processor interrupts triggered by another activity. Make sure to also check /proc/interrupts to identify the source of interrupts.
cs: Number of context switches per second. This is the number of times that the kernel replaces executable code of one program in memory with that of another program.
us: Percentage of CPU usage executing application code.
sy: Percentage of CPU usage executing kernel code.
id: Percentage of CPU time spent idling. If this value is zero over a longer time, your CPU(s) are working to full capacity. This is not necessarily a bad sign. Rather, refer to the values in the columns r and b to determine if your machine is equipped with sufficient CPU power.
wa: If wa time is non-zero, it indicates throughput lost because of waiting for I/O. This may be inevitable, for example, if a file is being read for the first time, background writeback cannot keep up, and so on. It can also be an indicator for a hardware bottleneck (network or hard disk). Lastly, it can indicate a potential for tuning the virtual memory manager (refer to Chapter 14, Tuning the Memory Management Subsystem).
st: Percentage of CPU time stolen from a virtual machine.
See vmstat --help for more options.
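As a worked illustration of the r, si and so heuristics above, the sketch below applies them to the first rows of the heavily used (CPU-bound) sample output. The helper names are made up; the thresholds follow the column descriptions.

```python
# Sketch: flag CPU pressure and swap thrashing from vmstat samples.
# The rows below are the first three samples of the CPU-bound
# example output above, in column order:
#  r   b   swpd    free    buff    cache   si so   bi     bo    in    cs  us sy id wa st
SAMPLES = [
    (32, 1, 26236, 459640, 110240, 6312648, 0, 0, 9944,     2, 4552, 6597, 95, 5, 0, 0, 0),
    (23, 1, 26236, 396728, 110336, 6136224, 0, 0, 9588,     0, 4468, 6273, 94, 6, 0, 0, 0),
    (35, 0, 26236, 554920, 110508, 6166508, 0, 0, 7684, 27992, 4474, 4700, 95, 5, 0, 0, 0),
]

def cpu_bound(samples, ncpus):
    """True if the run queue (r) constantly exceeds the number of CPUs."""
    return all(row[0] > ncpus for row in samples)

def swap_thrashing(samples):
    """True if both si and so are non-zero in every sample."""
    return all(row[6] > 0 and row[7] > 0 for row in samples)

print(cpu_bound(SAMPLES, 2))    # True: r is far above the 2 available CPUs
print(swap_thrashing(SAMPLES))  # False: no swap traffic in these samples
```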
dstat #
dstat is a replacement for tools such as
vmstat, iostat,
netstat, or ifstat.
dstat displays information about the system
resources in real time. For example, you can compare disk usage
in combination with interrupts from the IDE controller, or compare
network bandwidth with the disk throughput (in the same interval).
By default, its output is presented in readable tables. Alternatively, CSV output can be produced which is suitable as a spreadsheet import format.
It is written in Python and can be enhanced with plug-ins.
This is the general syntax:
dstat [-afv] [OPTIONS..] [DELAY [COUNT]]
All options and parameters are optional. Without any parameter, dstat
displays statistics about CPU (-c,
--cpu), disk (-d,
--disk), network (-n,
--net), paging (-g,
--page), and the interrupts and context switches of
the system (-y, --sys); it refreshes
the output every second ad infinitum:
root # dstat
You did not select any stats, using -cdngy by default.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  0   0 100   0   0   0|  15k   44k|   0     0 |   0    82B| 148   194
  0   0 100   0   0   0|   0     0 |5430B  170B|   0     0 | 163   187
  0   0 100   0   0   0|   0     0 |6363B  842B|   0     0 | 196   185
-a, --all
equal to -cdngy (default)
-f, --full
expand -C, -D,
-I, -N and -S
discovery lists
-v, --vmstat
equal to -pmgdsc, -D total
DELAY: delay in seconds between each update
COUNT: the number of updates to display before exiting
The default delay is 1 and the count is unspecified (unlimited).
For more information, see the man page of dstat and
its Web page at http://dag.wieers.com/home-made/dstat/.
sar #
sar can generate extensive reports on almost all
important system activities, among them CPU, memory, IRQ usage, IO, or
networking. It can also generate reports on the fly.
sar gathers all its data from the
/proc file system.
sar is part of the sysstat
package. Install the package either with YaST, or with zypper in
sysstat.
sar #
To generate reports on the fly, call sar with an
interval (seconds) and a count. To generate reports from files, specify
a file name with the option -f instead of interval and
count. If file name, interval, and count are not specified,
sar attempts to generate a report from
/var/log/sa/saDD, where
DD stands for the current day. This is the
default location to where sadc (the system
activity data collector) writes its data.
Query multiple files with multiple -f options.
sar 2 10                        # on-the-fly report, 10 times every 2 seconds
sar -f ~/reports/sar_2014_07_17 # queries file sar_2014_07_17
sar                             # queries file from today in /var/log/sa/
cd /var/log/sa && \
sar -f sa01 -f sa02             # queries files /var/log/sa/sa0[12]
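The default data file location described above (/var/log/sa/saDD, with DD the current day of the month) can be computed as follows. This is a sketch; default_sar_file is a made-up helper name.

```python
# Sketch: the default sar data file for a given date is
# /var/log/sa/saDD, where DD is the zero-padded day of the month.
import datetime

def default_sar_file(date):
    """Hypothetical helper: path sadc writes to on the given day."""
    return "/var/log/sa/sa%02d" % date.day

print(default_sar_file(datetime.date(2014, 7, 17)))  # /var/log/sa/sa17
```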
Find examples for useful sar calls and their
interpretation below. For detailed information on the meaning of each
column, refer to the man page of
sar (man 1 sar). Also refer to the man page for more options and
reports; sar offers plenty of them.
sar #
When called with no options, sar shows a basic
report about CPU usage. On multi-processor machines, results for all
CPUs are summarized. Use the option -P ALL to also
see statistics for individual CPUs.
root # sar 10 5
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
17:51:29 CPU %user %nice %system %iowait %steal %idle
17:51:39 all 57,93 0,00 9,58 1,01 0,00 31,47
17:51:49 all 32,71 0,00 3,79 0,05 0,00 63,45
17:51:59 all 47,23 0,00 3,66 0,00 0,00 49,11
17:52:09 all 53,33 0,00 4,88 0,05 0,00 41,74
17:52:19 all 56,98 0,00 5,65 0,10 0,00 37,27
Average: all 49,62 0,00 5,51 0,24 0,00 44,62
The %iowait column displays the percentage of time that the CPU was idle while waiting for an I/O request. If this value is significantly higher than zero over a longer time, there is a bottleneck in the I/O system (network or hard disk). If the value is zero over a longer time, your CPU is working at capacity.
sar -r #
Generate an overall picture of the system memory (RAM) by using the
option -r:
root # sar -r 10 5
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
17:55:27 kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
17:55:37 104232 1834624 94.62 20 627340 2677656 66.24 802052 828024 1744
17:55:47 98584 1840272 94.92 20 624536 2693936 66.65 808872 826932 2012
17:55:57 87088 1851768 95.51 20 605288 2706392 66.95 827260 821304 1588
17:56:07 86268 1852588 95.55 20 599240 2739224 67.77 829764 820888 3036
17:56:17 104260 1834596 94.62 20 599864 2730688 67.56 811284 821584 3164
Average: 96086 1842770 95.04 20 611254 2709579 67.03 815846 823746 2309
The columns kbcommit and %commit show an approximation of the maximum amount of memory (RAM and swap) that the current workload could need. While kbcommit displays the absolute number in kilobytes, %commit displays a percentage.
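The percentage columns in the sar -r report are plain ratios. As a quick check, %memused in the first sample row above can be reproduced from kbmemfree and kbmemused:

```python
# Check: %memused = kbmemused / (kbmemfree + kbmemused) * 100,
# using the first sample row of the sar -r output above.
kbmemfree = 104232
kbmemused = 1834624

memused_pct = 100.0 * kbmemused / (kbmemfree + kbmemused)
print(round(memused_pct, 2))  # 94.62, matching the %memused column
```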
sar -B #
Use the option -B to display the kernel paging
statistics.
root # sar -B 10 5
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
18:23:01 pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
18:23:11 366.80 11.60 542.50 1.10 4354.80 0.00 0.00 0.00 0.00
18:23:21 0.00 333.30 1522.40 0.00 18132.40 0.00 0.00 0.00 0.00
18:23:31 47.20 127.40 1048.30 0.10 11887.30 0.00 0.00 0.00 0.00
18:23:41 46.40 2.50 336.10 0.10 7945.00 0.00 0.00 0.00 0.00
18:23:51 0.00 583.70 2037.20 0.00 17731.90 0.00 0.00 0.00 0.00
Average: 92.08 211.70 1097.30 0.26 12010.28 0.00 0.00 0.00 0.00
The majflt/s (major faults per second) column shows how many pages are loaded from disk into memory. The source of the faults may be file accesses or faults. At times, many major faults are normal, for example during application start-up time. If major faults are experienced for the entire lifetime of the application, it may be an indication that there is insufficient main memory, particularly if combined with large amounts of direct scanning (pgscand/s).
The %vmeff column shows the number of pages scanned (pgscank/s and pgscand/s) in relation to the ones being reused from the main memory cache or the swap cache (pgsteal/s). It is a measurement of the efficiency of page reclaim. Healthy values are either near 100 (every inactive page swapped out is being reused) or 0 (no pages have been scanned). The value should not drop below 30.
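The page reclaim efficiency computation described above can be sketched as follows. The figures passed in the example calls are hypothetical, since the sample output above shows no page scanning at all.

```python
# Sketch: %vmeff is pages reused (pgsteal/s) relative to pages
# scanned (pgscank/s + pgscand/s). Input figures below are hypothetical.

def vmeff(pgsteal, pgscank, pgscand):
    scanned = pgscank + pgscand
    if scanned == 0:
        return 0.0          # sar reports 0.00 when nothing was scanned
    return 100.0 * pgsteal / scanned

print(vmeff(90.0, 100.0, 0.0))  # 90.0: healthy, most scanned pages reused
print(vmeff(25.0, 100.0, 0.0))  # 25.0: below the guideline value of 30
```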
sar -d #
Use the option -d to display block device
statistics (hard disk, optical drive, USB storage device, etc.). Make sure to use the
additional option -p (pretty-print) to make the
DEV column readable.
root # sar -d -p 10 5
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
18:46:09 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
18:46:19 sda 1.70 33.60 0.00 19.76 0.00 0.47 0.47 0.08
18:46:19 sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:46:19 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
18:46:29 sda 8.60 114.40 518.10 73.55 0.06 7.12 0.93 0.80
18:46:29 sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:46:29 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
18:46:39 sda 40.50 3800.80 454.90 105.08 0.36 8.86 0.69 2.80
18:46:39 sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:46:39 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
18:46:49 sda 1.40 0.00 204.90 146.36 0.00 0.29 0.29 0.04
18:46:49 sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:46:49 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
18:46:59 sda 3.30 0.00 503.80 152.67 0.03 8.12 1.70 0.56
18:46:59 sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
Average: sda 11.10 789.76 336.34 101.45 0.09 8.07 0.77 0.86
Average: sr0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Compare the values for tps, rd_sec/s, and wr_sec/s of all disks. Constantly high values in the svctm and %util columns could be an indication that the I/O subsystem is a bottleneck.
If the machine uses multiple disks, then it is best if I/O is interleaved evenly between disks of equal speed and capacity. It will be necessary to take into account whether the storage has multiple tiers. Furthermore, if there are multiple paths to storage then consider what the link saturation will be when balancing how storage is used.
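To put the rd_sec/s and wr_sec/s averages for sda above into more familiar units: sar counts these rates in 512-byte sectors, so they convert to KiB/s as in this small sketch.

```python
# Sketch: convert sar -d sector rates (512-byte sectors) into KiB/s.
# The rates below are the sda averages from the output above.

SECTOR_BYTES = 512

def sectors_to_kib(rate):
    """Sectors per second -> KiB per second."""
    return rate * SECTOR_BYTES / 1024.0

print(round(sectors_to_kib(789.76), 2))  # 394.88 KiB/s read on average
print(round(sectors_to_kib(336.34), 2))  # 168.17 KiB/s written on average
```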
sar -n KEYWORD #
The option -n lets you generate multiple network
related reports. Specify one of the following keywords along with the
-n:
DEV: Generates a statistic report for all network devices
EDEV: Generates an error statistics report for all network devices
NFS: Generates a statistic report for an NFS client
NFSD: Generates a statistic report for an NFS server
SOCK: Generates a statistic report on sockets
ALL: Generates all network statistic reports
sar Data #
sar reports are not always easy to parse for humans.
kSar, a Java application visualizing your sar data,
creates easy-to-read graphs. It can even generate PDF reports. kSar
takes data generated on the fly and past data from a file. kSar
is licensed under the BSD license and is available from
https://sourceforge.net/projects/ksar/.
iostat #
To monitor the system device load, use iostat. It
generates reports that can be useful for better balancing the load
between physical disks attached to your system.
To be able to use iostat, install the package
sysstat.
The first iostat report shows statistics collected
since the system was booted. Subsequent reports cover the time since the
previous report.
tux > iostat
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (4 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
17.68 4.49 4.24 0.29 0.00 73.31
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sdb 2.02 36.74 45.73 3544894 4412392
sda 1.05 5.12 13.47 493753 1300276
sdc 0.02 0.14 0.00 13641 37
Invoking iostat in this way will help you find out
whether throughput is different from your expectation, but not why.
Such questions can be better answered by an extended report which can be
generated by invoking iostat -x.
Extended reports additionally include, for example, information on average
queue sizes and average wait times.
It may also be easier to evaluate the data if idle block devices are
excluded using the -z switch.
Find definitions for each of the displayed column titles in the
man page of iostat (man 1 iostat).
You can also specify that a certain device should be monitored at specified
intervals.
For example, to generate five reports at three-second intervals for the
device sda, use:
tux > iostat -p sda 3 5
To show statistics of network file systems (NFS), there are two similar utilities:
nfsiostat-sysstat is included with the
package sysstat.
nfsiostat is included with the package
nfs-client.
mpstat #
The utility mpstat examines activities of each
available processor. If your system has one processor only, the global
average statistics will be reported.
The timing arguments work the same way as with the
iostat command. Entering mpstat 2
5 prints five reports for all processors in two-second
intervals.
root # mpstat 2 5
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
13:51:10 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
13:51:12 all 8,27 0,00 0,50 0,00 0,00 0,00 0,00 0,00 0,00 91,23
13:51:14 all 46,62 0,00 3,01 0,00 0,00 0,25 0,00 0,00 0,00 50,13
13:51:16 all 54,71 0,00 3,82 0,00 0,00 0,51 0,00 0,00 0,00 40,97
13:51:18 all 78,77 0,00 5,12 0,00 0,00 0,77 0,00 0,00 0,00 15,35
13:51:20 all 51,65 0,00 4,30 0,00 0,00 0,51 0,00 0,00 0,00 43,54
Average: all 47,85 0,00 3,34 0,00 0,00 0,40 0,00 0,00 0,00 48,41
From the mpstat data, you can see:
The ratio between %usr and %sys. For example, a ratio of 10:1 indicates the workload is mostly running application code and analysis should focus on the application. A ratio of 1:10 indicates the workload is mostly kernel-bound and tuning the kernel is worth considering. Alternatively, determine why the application is kernel-bound and see if that can be alleviated.
Whether there is a subset of CPUs that are nearly fully utilized even if the system is lightly loaded overall. Few hot CPUs can indicate that the workload is not parallelized and could benefit from executing on a machine with a smaller number of faster processors.
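The %usr/%sys rule of thumb above can be sketched as a tiny classifier. The function name and structure are made up for this illustration; the 10:1 cut-offs follow the text, and the example call uses the mpstat averages shown above.

```python
# Sketch: rough user/kernel classification of a workload from
# the mpstat %usr and %sys averages, per the 10:1 rule of thumb above.

def classify(usr, sys):
    """Hypothetical helper; usr and sys are CPU-time percentages."""
    if sys == 0 or usr / sys >= 10:
        return "application-bound"
    if usr == 0 or sys / usr >= 10:
        return "kernel-bound"
    return "mixed"

print(classify(47.85, 3.34))  # application-bound (ratio of roughly 14:1)
```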
turbostat #
turbostat shows frequencies, load, temperature, and power
of AMD64/Intel 64 processors. It can operate in two modes: If called
with a command, the command process is forked and statistics are displayed
upon command completion. When run without a command, it will display
updated statistics every five seconds. Note that
turbostat requires the kernel module
msr to be loaded.
tux > sudo turbostat find /etc -type d -exec true {} \;
0.546880 sec
CPU Avg_MHz   Busy% Bzy_MHz TSC_MHz
  -     416   28.43    1465    3215
  0     631   37.29    1691    3215
  1     416   27.14    1534    3215
  2     270   24.30    1113    3215
  3     406   26.57    1530    3214
  4     505   32.46    1556    3214
  5     270   22.79    1184    3214
The output depends on the CPU type and may vary. To display more details
such as temperature and power, use the --debug option. For
more command line options and an explanation of the field descriptions,
refer to man 8 turbostat.
pidstat #
If you need to see what load a particular task applies to your system,
use the pidstat command. It prints the activity of every
selected task, or of all tasks managed by the Linux kernel if no task is
specified. You can also set the number of reports to be displayed and
the time interval between them.
For example, pidstat -C firefox 2 3
prints the load statistic for tasks whose command name includes the
string “firefox”. There will be three reports printed at
two second intervals.
root # pidstat -C firefox 2 3
Linux 4.4.21-64-default (jupiter) 10/12/16 _x86_64_ (2 CPU)
14:09:11 UID PID %usr %system %guest %CPU CPU Command
14:09:13 1000 387 22,77 0,99 0,00 23,76 1 firefox
14:09:13 UID PID %usr %system %guest %CPU CPU Command
14:09:15 1000 387 46,50 3,00 0,00 49,50 1 firefox
14:09:15 UID PID %usr %system %guest %CPU CPU Command
14:09:17 1000 387 60,50 7,00 0,00 67,50 1 firefox
Average: UID PID %usr %system %guest %CPU CPU Command
Average: 1000 387 43,19 3,65 0,00 46,84 - firefox
Similarly, pidstat -d can be
used to estimate how much I/O tasks are doing, whether they are
sleeping on that I/O and how many clock ticks the task was stalled.
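The per-task counters behind pidstat -d live in /proc/PID/io. A minimal sketch, assuming the kernel was built with task I/O accounting (true for most distributions):

```shell
# rchar/wchar count bytes passed to read()/write(); read_bytes/write_bytes
# count actual storage I/O of this process
io=$(cat "/proc/$$/io" 2>/dev/null)
echo "$io"
```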
dmesg #
The Linux kernel keeps certain messages in a ring buffer. To view these
messages, enter the command dmesg -T.
Older events are logged in the systemd journal. See
Chapter 11, journalctl: Query the systemd Journal for more information on the journal.
lsof #
To view a list of all the files open for the process with process ID
PID, use lsof -p PID. For example, to
view all the files used by the current shell, enter:
root # lsof -p $$
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 8842 root cwd DIR 0,32 222 6772 /root
bash 8842 root rtd DIR 0,32 166 256 /
bash 8842 root txt REG 0,32 656584 31066 /bin/bash
bash 8842 root mem REG 0,32 1978832 22993 /lib64/libc-2.19.so
[...]
bash 8842 root 2u CHR 136,2 0t0 5 /dev/pts/2
bash 8842 root 255u CHR 136,2 0t0 5 /dev/pts/2
The special shell variable $$, whose value is the
process ID of the shell, has been used.
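Without lsof, the same descriptor list is available directly in /proc, where each open file descriptor of a process appears as a symbolic link under /proc/PID/fd:

```shell
# List the open descriptors of the current shell
ls -l "/proc/$$/fd"
fd_count=$(ls "/proc/$$/fd" | wc -l)
echo "open descriptors: $fd_count"
```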
When used with -i, lsof lists
currently open Internet files as well:
root # lsof -i
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
wickedd-d 917 root 8u IPv4 16627 0t0 UDP *:bootpc
wickedd-d 918 root 8u IPv6 20752 0t0 UDP [fe80::5054:ff:fe72:5ead]:dhcpv6-client
sshd 3152 root 3u IPv4 18618 0t0 TCP *:ssh (LISTEN)
sshd 3152 root 4u IPv6 18620 0t0 TCP *:ssh (LISTEN)
master 4746 root 13u IPv4 20588 0t0 TCP localhost:smtp (LISTEN)
master 4746 root 14u IPv6 20589 0t0 TCP localhost:smtp (LISTEN)
sshd 8837 root 5u IPv4 293709 0t0 TCP jupiter.suse.de:ssh->venus.suse.de:33619 (ESTABLISHED)
sshd 8837 root 9u IPv6 294830 0t0 TCP localhost:x11 (LISTEN)
sshd 8837 root 10u IPv4 294831 0t0 TCP localhost:x11 (LISTEN)
udevadm monitor #
udevadm monitor listens to the kernel uevents and
events sent out by a udev rule and prints the device path (DEVPATH) of
the event to the console. This is a sequence of events while connecting
a USB memory stick:
Only the root user is allowed to monitor udev events with the
udevadm command.
UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UEVENT[1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806687] add@/class/scsi_host/host4
UEVENT[1138806687] add@/class/usb_device/usbdev4.10
UDEV [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2
UDEV [1138806687] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV [1138806687] add@/class/scsi_host/host4
UDEV [1138806687] add@/class/usb_device/usbdev4.10
UEVENT[1138806692] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UEVENT[1138806692] add@/block/sdb
UEVENT[1138806692] add@/class/scsi_generic/sg1
UEVENT[1138806692] add@/class/scsi_device/4:0:0:0
UDEV [1138806693] add@/devices/pci0000:00/0000:00:1d.7/usb4/4-2/4-2.2/4-2.2
UDEV [1138806693] add@/class/scsi_generic/sg1
UDEV [1138806693] add@/class/scsi_device/4:0:0:0
UDEV [1138806693] add@/block/sdb
UEVENT[1138806694] add@/block/sdb/sdb1
UDEV [1138806694] add@/block/sdb/sdb1
UEVENT[1138806694] mount@/block/sdb/sdb1
UEVENT[1138806697] umount@/block/sdb/sdb1
ipcs #
The command ipcs produces a list of the IPC resources
currently in use:
root # ipcs
------ Message Queues --------
key msqid owner perms used-bytes messages
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 65536 tux 600 524288 2 dest
0x00000000 98305 tux 600 4194304 2 dest
0x00000000 884738 root 600 524288 2 dest
0x00000000 786435 tux 600 4194304 2 dest
0x00000000 12058628 tux 600 524288 2 dest
0x00000000 917509 root 600 524288 2 dest
0x00000000 12353542 tux 600 196608 2 dest
0x00000000 12451847 tux 600 524288 2 dest
0x00000000 11567114 root 600 262144 1 dest
0x00000000 10911763 tux 600 2097152 2 dest
0x00000000 11665429 root 600 2336768 2 dest
0x00000000 11698198 root 600 196608 2 dest
0x00000000 11730967 root 600 524288 2 dest
------ Semaphore Arrays --------
key semid owner perms nsems
0xa12e0919 32768 tux 666 2
ps #
The command ps produces a list of processes. Most
parameters must be written without a minus sign. Refer to ps
--help for a brief help or to the man page for extensive help.
To list all processes with user and command line information, use
ps axu:
tux > ps axu
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.3 34376 4608 ? Ss Jul24 0:02 /usr/lib/systemd/systemd
root 2 0.0 0.0 0 0 ? S Jul24 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? S Jul24 0:00 [ksoftirqd/0]
root 5 0.0 0.0 0 0 ? S< Jul24 0:00 [kworker/0:0H]
root 6 0.0 0.0 0 0 ? S Jul24 0:00 [kworker/u2:0]
root 7 0.0 0.0 0 0 ? S Jul24 0:00 [migration/0]
[...]
tux 12583 0.0 0.1 185980 2720 ? Sl 10:12 0:00 /usr/lib/gvfs/gvfs-mtp-volume-monitor
tux 12587 0.0 0.1 198132 3044 ? Sl 10:12 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
tux 12591 0.0 0.1 181940 2700 ? Sl 10:12 0:00 /usr/lib/gvfs/gvfs-goa-volume-monitor
tux 12594 8.1 10.6 1418216 163564 ? Sl 10:12 0:03 /usr/bin/gnome-shell
tux 12600 0.0 0.3 393448 5972 ? Sl 10:12 0:00 /usr/lib/gnome-settings-daemon-3.0/gsd-printer
tux 12625 0.0 0.6 227776 10112 ? Sl 10:12 0:00 /usr/lib/gnome-control-center-search-provider
tux 12626 0.5 1.5 890972 23540 ? Sl 10:12 0:00 /usr/bin/nautilus --no-default-window
[...]
To check how many sshd processes are running, use the
option -p together with the command
pidof, which lists the process IDs of the given
processes.
tux > ps -p $(pidof sshd)
PID TTY STAT TIME COMMAND
1545 ? Ss 0:00 /usr/sbin/sshd -D
4608 ? Ss 0:00 sshd: root@pts/0
The process list can be formatted according to your needs. The option
L returns a list of all keywords. Enter the following
command to issue a list of all processes sorted by memory usage:
tux > ps ax --format pid,rss,cmd --sort rss
PID RSS CMD
2 0 [kthreadd]
3 0 [ksoftirqd/0]
4 0 [kworker/0:0]
5 0 [kworker/0:0H]
6 0 [kworker/u2:0]
7 0 [migration/0]
8 0 [rcu_bh]
[...]
12518 22996 /usr/lib/gnome-settings-daemon-3.0/gnome-settings-daemon
12626 23540 /usr/bin/nautilus --no-default-window
12305 32188 /usr/bin/Xorg :0 -background none -verbose
12594 164900 /usr/bin/gnome-shell
Useful ps Calls #
ps aux --sort COLUMN
Sort the output by COLUMN. Replace COLUMN with pmem for physical memory ratio, pcpu for CPU ratio, or rss for resident set size (non-swapped physical memory).
ps axo pid,%cpu,rss,vsz,args,wchan
Shows every process with their PID, CPU usage ratio, memory size (resident and virtual), name, and the kernel function they are sleeping in (wchan).
ps axfo pid,args
Show a process tree.
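ps gathers its data from /proc. As a rough shell-only sketch of the memory-sorted listing above (kernel threads carry no VmRSS line and are skipped):

```shell
# Approximate `ps ax --format pid,rss,cmd --sort rss` with plain /proc reads
list=$(
  for d in /proc/[0-9]*; do
    rss=$(awk '/^VmRSS:/ {print $2}' "$d/status" 2>/dev/null)
    [ -n "$rss" ] || continue                      # kernel thread or exited process
    cmd=$(tr '\0' ' ' < "$d/cmdline" 2>/dev/null)  # cmdline is NUL-separated
    printf '%s %s %s\n' "${d#/proc/}" "$rss" "$cmd"
  done | sort -k2 -n
)
# The five processes with the largest resident set
echo "$list" | tail -n 5
```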
pstree #
The command pstree produces a list of processes in
the form of a tree:
tux > pstree
systemd---accounts-daemon---{gdbus}
| |-{gmain}
|-at-spi-bus-laun---dbus-daemon
| |-{dconf worker}
| |-{gdbus}
| |-{gmain}
|-at-spi2-registr---{gdbus}
|-cron
|-2*[dbus-daemon]
|-dbus-launch
|-dconf-service---{gdbus}
| |-{gmain}
|-gconfd-2
|-gdm---gdm-simple-slav---Xorg
| | |-gdm-session-wor---gnome-session---gnome-setti+
| | | | |-gnome-shell+++
| | | | |-{dconf work+
| | | | |-{gdbus}
| | | | |-{gmain}
| | | |-{gdbus}
| | | |-{gmain}
| | |-{gdbus}
| | |-{gmain}
| |-{gdbus}
| |-{gmain}
[...]
The parameter -p adds the process ID to a given name.
To have the command lines displayed as well, use the -a
parameter.
top #
The command top (an abbreviation of “table of
processes”) displays a list of processes that is refreshed every
two seconds. To terminate the program, press Q. The
parameter -n 1 terminates the program after a single
display of the process list. The following is an example output of the
command top -n 1:
tux > top -n 1
Tasks: 128 total, 1 running, 127 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.4 us, 1.2 sy, 0.0 ni, 96.3 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 1535508 total, 699948 used, 835560 free, 880 buffers
KiB Swap: 1541116 total, 0 used, 1541116 free. 377000 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 116292 4660 2028 S 0.000 0.303 0:04.45 systemd
2 root 20 0 0 0 0 S 0.000 0.000 0:00.00 kthreadd
3 root 20 0 0 0 0 S 0.000 0.000 0:00.07 ksoftirqd+
5 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 kworker/0+
6 root 20 0 0 0 0 S 0.000 0.000 0:00.00 kworker/u+
7 root rt 0 0 0 0 S 0.000 0.000 0:00.00 migration+
8 root 20 0 0 0 0 S 0.000 0.000 0:00.00 rcu_bh
9 root 20 0 0 0 0 S 0.000 0.000 0:00.24 rcu_sched
10 root rt 0 0 0 0 S 0.000 0.000 0:00.01 watchdog/0
11 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 khelper
12 root 20 0 0 0 0 S 0.000 0.000 0:00.00 kdevtmpfs
13 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 netns
14 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 writeback
15 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 kintegrit+
16 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 bioset
17 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 crypto
18 root 0 -20 0 0 0 S 0.000 0.000 0:00.00 kblockdBy default the output is sorted by CPU usage (column , shortcut Shift–P). Use the following key combinations to change the sort field:
Shift–M: Resident Memory (RES)
Shift–N: Process ID (PID)
Shift–T: Time (TIME+)
To use any other field for sorting, press F and select a field from the list. To toggle the sort order, use Shift–R.
The parameter -U UID
monitors only the processes associated with a particular user. Replace
UID with the user ID of the user. Use
top -U $(id -u) to show processes of the current user.
iotop #
The iotop utility displays a table of I/O usage by
processes or threads.
iotop
iotop is not installed by default. You need to
install it manually with zypper in iotop as
root.
iotop displays columns for the I/O bandwidth read and
written by each process during the sampling period. It also displays the
percentage of time the process spent while swapping in and while waiting
on I/O. For each process, its I/O priority (class/level) is shown. In
addition, the total I/O bandwidth read and written during the sampling
period is displayed at the top of the interface.
The ← and → keys change the sorting.
R reverses the sort order.
O toggles between showing all processes and threads
(default view) and showing only those doing I/O. (This function is
similar to adding --only on command line.)
P toggles between showing threads (default view) and
processes. (This function is similar to --processes.)
A toggles between showing the current I/O bandwidth
(default view) and accumulated I/O operations since
iotop was started. (This function is similar to
--accumulated.)
I lets you change the priority of a thread or a process's threads.
Q quits iotop.
Pressing any other key will force a refresh.
Following is an example output of the command iotop
--only, while find and
emacs are running:
root # iotop --only
Total DISK READ: 50.61 K/s | Total DISK WRITE: 11.68 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
3416 be/4 tux 50.61 K/s 0.00 B/s 0.00 % 4.05 % find /
275 be/3 root 0.00 B/s 3.89 K/s 0.00 % 2.34 % [jbd2/sda2-8]
5055 be/4 tux 0.00 B/s 3.89 K/s 0.00 % 0.04 % emacs
iotop can be also used in a batch mode
(-b) and its output stored in a file for later
analysis. For a complete set of options, see the manual page
(man 8 iotop).
nice and renice #
The kernel uses a process's nice level, also called niceness, to
determine which processes receive more CPU time than others. The higher
the “nice” level of a process is, the less CPU time it takes
from other processes. Nice levels range from -20 (the least
“nice” level) to 19. Negative values can only be set by
root.
Adjusting the niceness level is useful when running a non-time-critical, long-running process that uses large amounts of CPU time, for example, compiling a kernel on a system that also performs other tasks. Making such a process “nicer” ensures that the other tasks, for example a Web server, have a higher priority.
Calling nice without any parameters prints the
current niceness:
tux > nice
0
Running nice COMMAND
increments the current nice level for the given command by 10. Using
nice -n
LEVEL
COMMAND lets you specify a new niceness
relative to the current one.
To change the niceness of a running process, use
renice PRIORITY -p
PROCESS_ID, for example:
tux > renice +5 -p 3266
To renice all processes owned by a specific user, use the option
-u USER.
Process groups are reniced by the option -g PROCESS_GROUP_ID.
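The effect is easy to observe with nice itself, since a nice call without operands prints the current niceness:

```shell
base=$(nice)               # niceness of the current shell, typically 0
raised=$(nice -n 5 nice)   # run the inner `nice` five levels "nicer"
echo "base=$base raised=$raised"
```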
free #
The utility free examines RAM and swap usage. Details
of both free and used memory and swap areas are shown:
tux > free
total used free shared buffers cached
Mem: 32900500 32703448 197052 0 255668 5787364
-/+ buffers/cache: 26660416 6240084
Swap: 2046972 304680 1742292
The options -b, -k,
-m, and -g show the output in bytes, KB,
MB, or GB, respectively. The parameter -s DELAY ensures
that the display is refreshed every DELAY
seconds. For example, free -s 1.5 produces an update
every 1.5 seconds.
/proc/meminfo #
Use /proc/meminfo to get more detailed information
on memory usage than with free. In fact,
free uses some of the data from this file. See an
example output from a 64-bit system below. Note that it differs slightly
on 32-bit systems because of different memory management:
MemTotal: 1942636 kB
MemFree: 1294352 kB
MemAvailable: 1458744 kB
Buffers: 876 kB
Cached: 278476 kB
SwapCached: 0 kB
Active: 368328 kB
Inactive: 199368 kB
Active(anon): 288968 kB
Inactive(anon): 10568 kB
Active(file): 79360 kB
Inactive(file): 188800 kB
Unevictable: 80 kB
Mlocked: 80 kB
SwapTotal: 2103292 kB
SwapFree: 2103292 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 288592 kB
Mapped: 70444 kB
Shmem: 11192 kB
Slab: 40916 kB
SReclaimable: 17712 kB
SUnreclaim: 23204 kB
KernelStack: 2000 kB
PageTables: 10996 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3074608 kB
Committed_AS: 1407208 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 145996 kB
VmallocChunk: 34359588844 kB
HardwareCorrupted: 0 kB
AnonHugePages: 86016 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 79744 kB
DirectMap2M: 2017280 kB
These entries stand for the following:
MemTotal
Total amount of RAM.
MemFree
Amount of unused RAM.
MemAvailable
Estimate of how much memory is available for starting new applications without swapping.
Buffers
File buffer cache in RAM containing file system metadata.
Cached
Page cache in RAM. This excludes buffer cache and swap cache, but includes Shmem memory.
SwapCached
Page cache for swapped-out memory.
Active
Recently used memory that will not be reclaimed unless necessary or on explicit request. Active is the sum of Active(anon) and Active(file):
Active(anon)
Tracks swap-backed memory. This includes private and shared anonymous mappings and private file pages after copy-on-write.
Active(file)
Tracks other file system backed memory.
Inactive
Less recently used memory that will usually be reclaimed first. Inactive is the sum of Inactive(anon) and Inactive(file):
Inactive(anon)
Tracks swap-backed memory. This includes private and shared anonymous mappings and private file pages after copy-on-write.
Inactive(file)
Tracks other file system backed memory.
Unevictable
Amount of memory that cannot be reclaimed (for example, because it is mlocked or used as a RAM disk).
Mlocked
Amount of memory that is backed by the mlock system call.
mlock allows processes to define which part of
physical RAM their virtual memory should be mapped to.
However, mlock does not guarantee this
placement.
SwapTotal
Amount of swap space.
SwapFree
Amount of unused swap space.
Dirty
Amount of memory waiting to be written to disk, because it contains
changes compared to the backing storage. Dirty data can be explicitly
synchronized either by the application or by the kernel after a short
delay. A large amount of dirty data may take considerable time to write
to disk, resulting in stalls. The total amount of dirty data that can
exist at any time can be controlled with the
sysctl parameters vm.dirty_ratio
or vm.dirty_bytes (refer to Section 14.1.5, “Writeback” for more details).
Writeback
Amount of memory that is currently being written to disk.
Mapped
Memory claimed with the mmap system call.
Shmem
Memory shared between groups of processes, such as IPC data,
tmpfs data, and shared anonymous memory.
Slab
Memory allocation for internal data structures of the kernel.
SReclaimable
Slab section that can be reclaimed, such as caches (inode, dentry, etc.).
SUnreclaim
Slab section that cannot be reclaimed.
KernelStack
Amount of kernel space memory used by applications (through system calls).
PageTables
Amount of memory dedicated to page tables of all processes.
NFS_Unstable
NFS pages that have already been sent to the server, but are not yet committed there.
Bounce
Memory used for bounce buffers of block devices.
WritebackTmp
Memory used by FUSE for temporary writeback buffers.
CommitLimit
Amount of memory available to the system based on the overcommit ratio setting. This is only enforced if strict overcommit accounting is enabled.
Committed_AS
An approximation of the total amount of memory (RAM and swap) that the current workload would need in the worst case.
VmallocTotal
Amount of allocated kernel virtual address space.
VmallocUsed
Amount of used kernel virtual address space.
VmallocChunk
The largest contiguous block of available kernel virtual address space.
HardwareCorrupted
Amount of failed memory (can only be detected when using ECC RAM).
AnonHugePages
Anonymous hugepages that are mapped into user space page tables. These are allocated transparently for processes without being specifically requested, therefore they are also known as transparent hugepages (THP).
HugePages_Total
Number of preallocated hugepages for use by
SHM_HUGETLB and
MAP_HUGETLB or through the
hugetlbfs file system, as defined in
/proc/sys/vm/nr_hugepages.
HugePages_Free
Number of hugepages available.
HugePages_Rsvd
Number of hugepages that are committed (reserved) but not yet allocated.
HugePages_Surp
Number of hugepages available beyond HugePages_Total
(“surplus”), as defined
in /proc/sys/vm/nr_overcommit_hugepages.
Hugepagesize
Size of a hugepage; on AMD64/Intel 64 the default is 2048 kB.
DirectMap4k, DirectMap2M
Amount of kernel memory that is mapped to pages of the given size (in the example: 4 kB and 2 MB).
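As a small sketch of how these fields combine, awk can condense /proc/meminfo into a single figure, here the share of RAM counted as MemAvailable:

```shell
# Percentage of MemTotal that is currently MemAvailable
avail=$(awk '/^MemTotal:/     {t = $2}
             /^MemAvailable:/ {a = $2}
             END { printf "%.1f", 100 * a / t }' /proc/meminfo)
echo "${avail}% of RAM available for new applications"
```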
Exactly determining how much memory a certain process is consuming is
not possible with standard tools like top or
ps. Use the smaps subsystem, introduced in kernel
2.6.14, if you need exact data. It can be found at
/proc/PID/smaps and
shows you the number of clean and dirty memory pages the process with
the ID PID is using at that time. It
differentiates between shared and private memory, so you can see
how much memory the process is using without including memory shared
with other processes. For more information see
/usr/src/linux/Documentation/filesystems/proc.txt
(requires the package
kernel-source to be
installed).
Reading smaps is expensive. Therefore it is not recommended to monitor it regularly, but only when closely inspecting a certain process.
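As a one-off sketch (assuming the kernel exposes smaps, which is the default), the private, non-shared memory of the current shell can be summed like this:

```shell
# Sum Private_Clean and Private_Dirty over all mappings of this shell
priv=$(awk '/^Private_(Clean|Dirty):/ {sum += $2}
            END {print sum + 0}' "/proc/$$/smaps")
echo "private memory of PID $$: ${priv} kB"
```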
In case the network bandwidth is lower than expected, you should first check if any traffic shaping rules are active for your network segment.
ip #
ip is a powerful tool to set up and control network
interfaces. You can also use it to quickly view basic statistics about
network interfaces of the system. For example, whether the interface is
up or how many errors, dropped packets, or packet collisions there are.
If you run ip with no additional parameter, it
displays a help output. To list all network interfaces, enter
ip addr show (or abbreviated as ip
a). ip addr show up lists only running
network interfaces. ip -s link show
DEVICE lists statistics for the specified
interface only:
root # ip -s link show br0
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
link/ether 00:19:d1:72:d4:30 brd ff:ff:ff:ff:ff:ff
RX: bytes packets errors dropped overrun mcast
6346104756 9265517 0 10860 0 0
TX: bytes packets errors dropped carrier collsns
3996204683 3655523 0 0 0 0
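The counters ip prints come from the kernel; where ip is unavailable, a minimal sketch reading /proc/net/dev directly yields the same RX/TX totals:

```shell
# /proc/net/dev: two header lines, then per-interface counters
# (field 1: interface, field 2: RX bytes, field 10: TX bytes)
stats=$(awk 'NR > 2 { sub(/^ */, ""); split($0, f, /[: ]+/)
             printf "%-8s rx_bytes=%s tx_bytes=%s\n", f[1], f[2], f[10] }' /proc/net/dev)
echo "$stats"
```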
ip can also show interfaces
(link), routing tables (route), and
much more—refer to man 8 ip for details.
root # ip route
default via 192.168.2.1 dev eth1
192.168.2.0/24 dev eth0 proto kernel scope link src 192.168.2.100
192.168.2.0/24 dev eth1 proto kernel scope link src 192.168.2.101
192.168.2.0/24 dev eth2 proto kernel scope link src 192.168.2.102
root # ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:44:30:51 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:a3:c1:fb brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:32:a4:09 brd ff:ff:ff:ff:ff:ff
nethogs #
In some cases, for example if the network traffic suddenly becomes very
high, it is desirable to quickly find out which applications are
causing the traffic. nethogs, a tool with a design
similar to top, shows incoming and outgoing traffic for
all relevant processes:
PID USER PROGRAM DEV SENT RECEIVED
27145 root zypper eth0 5.719 391.749 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30015 0.102 2.326 KB/sec
26635 tux /usr/lib64/firefox/firefox eth0 0.026 0.026 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30045 0.000 0.021 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30045 0.000 0.018 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30015 0.000 0.018 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30045 0.000 0.017 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30045 0.000 0.017 KB/sec
? root ..0:113:80c0:8080:10:160:0:100:30045 0.069 0.000 KB/sec
? root unknown TCP 0.000 0.000 KB/sec
TOTAL 5.916 394.192 KB/sec
Like in top, nethogs features
interactive commands:
M: cycle between display modes (kb/s, kb, b, mb)
R: sort by RECEIVED
S: sort by SENT
Q: quit
ethtool #
ethtool can display and change detailed aspects of
your Ethernet network device. By default it prints the current settings
of the specified device.
root # ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
[...]
Link detected: yes
The following table shows ethtool options that you
can use to query the device for specific information:
ethtool options #
Option | Queries the device for
-a | pause parameter information
-c | interrupt coalescing information
-g | Rx/Tx (receive/transmit) ring parameter information
-i | associated driver information
-k | offload information
-S | NIC- and driver-specific statistics
ss #
ss is a tool to dump socket statistics and replaces
the netstat command. To list all
connections, use ss without parameters:
root # ss
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
u_str ESTAB 0 0 * 14082 * 14083
u_str ESTAB 0 0 * 18582 * 18583
u_str ESTAB 0 0 * 19449 * 19450
u_str ESTAB 0 0 @/tmp/dbus-gmUUwXABPV 18784 * 18783
u_str ESTAB 0 0 /var/run/dbus/system_bus_socket 19383 * 19382
u_str ESTAB 0 0 @/tmp/dbus-gmUUwXABPV 18617 * 18616
u_str ESTAB 0 0 @/tmp/dbus-58TPPDv8qv 19352 * 19351
u_str ESTAB 0 0 * 17658 * 17657
u_str ESTAB 0 0 * 17693 * 17694
[...]
To show all network ports currently open, use the following command:
root # ss -l
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
nl UNCONN 0 0 rtnl:4195117 *
nl UNCONN 0 0 rtnl:wickedd-auto4/811 *
nl UNCONN 0 0 rtnl:wickedd-dhcp4/813 *
nl UNCONN 0 0 rtnl:4195121 *
nl UNCONN 0 0 rtnl:4195115 *
nl UNCONN 0 0 rtnl:wickedd-dhcp6/814 *
nl UNCONN 0 0 rtnl:kernel *
nl UNCONN 0 0 rtnl:wickedd/817 *
nl UNCONN 0 0 rtnl:4195118 *
nl UNCONN 0 0 rtnl:nscd/706 *
nl UNCONN 4352 0 tcpdiag:ss/2381 *
[...]
When displaying network connections, you can specify the socket type to
display: TCP (-t) or UDP (-u) for
example. The -p option shows the PID and name of the
program to which each socket belongs.
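Under the hood, ss reads kernel tables such as /proc/net/tcp. As a sketch, listening IPv4 TCP sockets can be counted there directly (the st column holds the socket state; 0A means LISTEN):

```shell
# Count IPv4 TCP sockets currently in the LISTEN state
listeners=$(awk 'NR > 1 && $4 == "0A"' /proc/net/tcp | wc -l)
echo "IPv4 TCP sockets in LISTEN state: $listeners"
```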
The following example lists all TCP connections and the programs using
these connections. The -a option ensures that both
listening and non-listening (established) sockets are shown. The
-p option again adds the PID and name of the program to
which each socket belongs.
root # ss -t -a -p
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:ssh *:* users:(("sshd",1551,3))
LISTEN 0 100 127.0.0.1:smtp *:* users:(("master",1704,13))
ESTAB 0 132 10.120.65.198:ssh 10.120.4.150:55715 users:(("sshd",2103,5))
LISTEN 0 128 :::ssh :::* users:(("sshd",1551,4))
LISTEN 0 100 ::1:smtp :::* users:(("master",1704,14))
/proc File System #
The /proc file system is a pseudo file system in
which the kernel reserves important information in the form of virtual
files. For example, display the CPU type with this command:
tux > cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 30
model name : Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz
stepping : 5
microcode : 0x6
cpu MHz : 1197.000
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 5333.85
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[...]
Detailed information about the processor on the AMD64/Intel 64 architecture is
also available by running x86info.
Query the allocation and use of interrupts with the following command:
tux > cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 121 0 0 0 IO-APIC-edge timer
8: 0 0 0 1 IO-APIC-edge rtc0
9: 0 0 0 0 IO-APIC-fasteoi acpi
16: 0 11933 0 0 IO-APIC-fasteoi ehci_hcd:+
18: 0 0 0 0 IO-APIC-fasteoi i801_smbus
19: 0 117978 0 0 IO-APIC-fasteoi ata_piix,+
22: 0 0 3275185 0 IO-APIC-fasteoi enp5s1
23: 417927 0 0 0 IO-APIC-fasteoi ehci_hcd:+
40: 2727916 0 0 0 HPET_MSI-edge hpet2
41: 0 2749134 0 0 HPET_MSI-edge hpet3
42: 0 0 2759148 0 HPET_MSI-edge hpet4
43: 0 0 0 2678206 HPET_MSI-edge hpet5
45: 0 0 0 0 PCI-MSI-edge aerdrv, P+
46: 0 0 0 0 PCI-MSI-edge PCIe PME,+
47: 0 0 0 0 PCI-MSI-edge PCIe PME,+
48: 0 0 0 0 PCI-MSI-edge PCIe PME,+
49: 0 0 0 387 PCI-MSI-edge snd_hda_i+
50: 933117 0 0 0 PCI-MSI-edge nvidia
NMI: 2102 2023 2031 1920 Non-maskable interrupts
LOC: 92 71 57 41 Local timer interrupts
SPU: 0 0 0 0 Spurious interrupts
PMI: 2102 2023 2031 1920 Performance monitoring int+
IWI: 47331 45725 52464 46775 IRQ work interrupts
RTR: 2 0 0 0 APIC ICR read retries
RES: 472911 396463 339792 323820 Rescheduling interrupts
CAL: 48389 47345 54113 50478 Function call interrupts
TLB: 28410 26804 24389 26157 TLB shootdowns
TRM: 0 0 0 0 Thermal event interrupts
THR: 0 0 0 0 Threshold APIC interrupts
MCE: 0 0 0 0 Machine check exceptions
MCP: 40 40 40 40 Machine check polls
ERR: 0
MIS: 0
The address assignment of executables and libraries is contained in the
maps file:
tux > cat /proc/self/maps
08048000-0804c000 r-xp 00000000 03:03 17753 /bin/cat
0804c000-0804d000 rw-p 00004000 03:03 17753 /bin/cat
0804d000-0806e000 rw-p 0804d000 00:00 0 [heap]
b7d27000-b7d5a000 r--p 00000000 03:03 11867 /usr/lib/locale/en_GB.utf8/
b7d5a000-b7e32000 r--p 00000000 03:03 11868 /usr/lib/locale/en_GB.utf8/
b7e32000-b7e33000 rw-p b7e32000 00:00 0
b7e33000-b7f45000 r-xp 00000000 03:03 8837 /lib/libc-2.3.6.so
b7f45000-b7f46000 r--p 00112000 03:03 8837 /lib/libc-2.3.6.so
b7f46000-b7f48000 rw-p 00113000 03:03 8837 /lib/libc-2.3.6.so
b7f48000-b7f4c000 rw-p b7f48000 00:00 0
b7f52000-b7f53000 r--p 00000000 03:03 11842 /usr/lib/locale/en_GB.utf8/
[...]
b7f5b000-b7f61000 r--s 00000000 03:03 9109 /usr/lib/gconv/gconv-module
b7f61000-b7f62000 r--p 00000000 03:03 9720 /usr/lib/locale/en_GB.utf8/
b7f62000-b7f76000 r-xp 00000000 03:03 8828 /lib/ld-2.3.6.so
b7f76000-b7f78000 rw-p 00013000 03:03 8828 /lib/ld-2.3.6.so
bfd61000-bfd76000 rw-p bfd61000 00:00 0 [stack]
ffffe000-fffff000 ---p 00000000 00:00 0 [vdso]
A lot more information can be obtained from the /proc file system. Some important files and their contents are:
/proc/devices
Available devices
/proc/modules
Kernel modules loaded
/proc/cmdline
Kernel command line
/proc/meminfo
Detailed information about memory usage
/proc/config.gz
gzip-compressed configuration file of the kernel
currently running
Find information about processes currently running in the
/proc/NNN directories,
where NNN is the process ID (PID) of the
relevant process. Every process can find its own characteristics in
/proc/self/.
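For example, a process can inspect its own status through /proc/self (here it is the grep child process looking at itself):

```shell
# Name, PID, and resident set size of the inspecting process
self=$(grep -E '^(Name|Pid|VmRSS):' /proc/self/status)
echo "$self"
```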
Further information is available in the text file
/usr/src/linux/Documentation/filesystems/proc.txt
(this file is available when the package
kernel-source is installed).
procinfo #
Important information from the /proc file system is
summarized by the command procinfo:
tux > procinfo
Linux 3.11.10-17-desktop (geeko@buildhost) (gcc 4.8.1 20130909) #1 4CPU [jupiter.example.com]
Memory: Total Used Free Shared Buffers Cached
Mem: 8181908 8000632 181276 0 85472 2850872
Swap: 10481660 1576 10480084
Bootup: Mon Jul 28 09:54:13 2014 Load average: 1.61 0.85 0.74 2/904 25949
user : 1:54:41.84 12.7% page in : 2107312 disk 1: 52212r 20199w
nice : 0:00:00.46 0.0% page out: 1714461 disk 2: 19387r 10928w
system: 0:25:38.00 2.8% page act: 466673 disk 3: 548r 10w
IOwait: 0:04:16.45 0.4% page dea: 272297
hw irq: 0:00:00.42 0.0% page flt: 105754526
sw irq: 0:01:26.48 0.1% swap in : 0
idle : 12:14:43.65 81.5% swap out: 394
guest : 0:02:18.59 0.2%
uptime: 3:45:22.24 context : 99809844
irq 0: 121 timer irq 41: 3238224 hpet3
irq 8: 1 rtc0 irq 42: 3251898 hpet4
irq 9: 0 acpi irq 43: 3156368 hpet5
irq 16: 14589 ehci_hcd:usb1 irq 45: 0 aerdrv, PCIe PME
irq 18: 0 i801_smbus irq 46: 0 PCIe PME, pciehp
irq 19: 124861 ata_piix, ata_piix, f irq 47: 0 PCIe PME, pciehp
irq 22: 3742817 enp5s1 irq 48: 0 PCIe PME, pciehp
irq 23: 479248 ehci_hcd:usb2 irq 49: 387 snd_hda_intel
irq 40: 3216894 hpet2 irq 50: 1088673 nvidia
To see all the information, use the parameter -a. The
parameter -nN produces updates of the information every
N seconds. In this case, terminate the
program by pressing Q.
By default, the cumulative values are displayed. The parameter
-d produces the differential values. procinfo
-dn5 displays the values that have changed in the last five
seconds.
/proc/sys/ #
System control parameters are used to modify the Linux kernel parameters
at runtime. They reside in /proc/sys/ and can be
viewed and modified with the sysctl command. To list
all parameters, run sysctl -a. A
single parameter can be listed with sysctl
PARAMETER_NAME.
Parameters are grouped into categories and can be listed with
sysctl CATEGORY or by
listing the contents of the respective directories. The most important
categories are listed below. The links to further readings require the
installation of the package
kernel-source.
sysctl dev (/proc/sys/dev/)
Device-specific information.
sysctl fs (/proc/sys/fs/)
Used file handles, quotas, and other file system-oriented parameters.
For details see
/usr/src/linux/Documentation/sysctl/fs.txt.
sysctl kernel (/proc/sys/kernel/)
Information about the task scheduler, system shared memory, and other
kernel-related parameters. For details see
/usr/src/linux/Documentation/sysctl/kernel.txt.
sysctl net (/proc/sys/net/)
Information about network bridges, and general network parameters
(mainly the ipv4/ subdirectory). For details see
/usr/src/linux/Documentation/sysctl/net.txt.
sysctl vm (/proc/sys/vm/)
Entries in this path relate to information about the virtual memory,
swapping, and caching. For details see
/usr/src/linux/Documentation/sysctl/vm.txt.
To set or change a parameter for the current session, use the command
sysctl -w
PARAMETER=VALUE.
To permanently change a setting, add a line
PARAMETER=VALUE to
/etc/sysctl.conf.
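As a concrete sketch of both steps, using the vm.swappiness parameter as an illustrative example:

```shell
# Read the current value (vm.swappiness steers how aggressively the
# kernel swaps out memory); the name maps to /proc/sys/vm/swappiness
cat /proc/sys/vm/swappiness
# Set a new value for the current session (requires root):
#   sysctl -w vm.swappiness=10
# Make the change permanent:
#   echo "vm.swappiness=10" >> /etc/sysctl.conf
```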
lspci #
Most operating systems require root privileges to access the computer's PCI configuration.
The command lspci lists the PCI resources:
root # lspci
00:00.0 Host bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE \
DRAM Controller/Host-Hub Interface (rev 01)
00:01.0 PCI bridge: Intel Corporation 82845G/GL[Brookdale-G]/GE/PE \
Host-to-AGP Bridge (rev 01)
00:1d.0 USB Controller: Intel Corporation 82801DB/DBL/DBM \
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #1 (rev 01)
00:1d.1 USB Controller: Intel Corporation 82801DB/DBL/DBM \
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #2 (rev 01)
00:1d.2 USB Controller: Intel Corporation 82801DB/DBL/DBM \
(ICH4/ICH4-L/ICH4-M) USB UHCI Controller #3 (rev 01)
00:1d.7 USB Controller: Intel Corporation 82801DB/DBM \
(ICH4/ICH4-M) USB2 EHCI Controller (rev 01)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 81)
00:1f.0 ISA bridge: Intel Corporation 82801DB/DBL (ICH4/ICH4-L) \
LPC Interface Bridge (rev 01)
00:1f.1 IDE interface: Intel Corporation 82801DB (ICH4) IDE \
Controller (rev 01)
00:1f.3 SMBus: Intel Corporation 82801DB/DBL/DBM (ICH4/ICH4-L/ICH4-M) \
SMBus Controller (rev 01)
00:1f.5 Multimedia audio controller: Intel Corporation 82801DB/DBL/DBM \
(ICH4/ICH4-L/ICH4-M) AC'97 Audio Controller (rev 01)
01:00.0 VGA compatible controller: Matrox Graphics, Inc. G400/G450 (rev 85)
02:08.0 Ethernet controller: Intel Corporation 82801DB PRO/100 VE (LOM) \
Ethernet Controller (rev 81)
Using -v results in a more detailed listing:
root # lspci -v
[...]
00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet \
Controller (rev 02)
Subsystem: Intel Corporation PRO/1000 MT Desktop Adapter
Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 19
Memory at f0000000 (32-bit, non-prefetchable) [size=128K]
I/O ports at d010 [size=8]
Capabilities: [dc] Power Management version 2
Capabilities: [e4] PCI-X non-bridge device
Kernel driver in use: e1000
Kernel modules: e1000
Information about device name resolution is obtained from the file
/usr/share/pci.ids. PCI IDs not listed in this file
are marked “Unknown device.”
The parameter -vv produces all the information that
could be queried by the program. To view the pure numeric values, use
the parameter -n.
lsusb #
The command lsusb lists all USB devices. With the
option -v, print a more detailed list. The detailed
information is read from the directory
/proc/bus/usb/. The following is the output of
lsusb with these USB devices attached: hub, memory
stick, hard disk and mouse.
root # lsusb
Bus 004 Device 007: ID 0ea0:2168 Ours Technology, Inc. Transcend JetFlash \
2.0 / Astone USB Drive
Bus 004 Device 006: ID 04b4:6830 Cypress Semiconductor Corp. USB-2.0 IDE \
Adapter
Bus 004 Device 005: ID 05e3:0605 Genesys Logic, Inc.
Bus 004 Device 001: ID 0000:0000
Bus 003 Device 001: ID 0000:0000
Bus 002 Device 001: ID 0000:0000
Bus 001 Device 005: ID 046d:c012 Logitech, Inc. Optical Mouse
Bus 001 Device 001: ID 0000:0000
tmon #
tmon is a tool to help visualize, tune, and test the
complex thermal subsystem. When started without parameters,
tmon runs in monitoring mode:
┌──────THERMAL ZONES(SENSORS)──────────────────────────────┐
│Thermal Zones: acpitz00                                   │
│Trip Points: PC                                           │
└──────────────────────────────────────────────────────────┘
┌─────────── COOLING DEVICES ──────────────────────────────┐
│ID Cooling Dev   Cur    Max   Thermal Zone Binding        │
│00 Processor       0      3   ││││││││││││                │
│01 Processor       0      3   ││││││││││││                │
│02 Processor       0      3   ││││││││││││                │
│03 Processor       0      3   ││││││││││││                │
│04 intel_powerc   -1     50   ││││││││││││                │
└──────────────────────────────────────────────────────────┘
┌──────────────────────────────────────────────────────────┐
│               10        20        30        40           │
│acpitz 0:[ 8][>>>>>>>>>P9 C31                             │
└──────────────────────────────────────────────────────────┘
┌────────────────── CONTROLS ──────────────────────────────┐
│PID gain: kp=0.36 ki=5.00 kd=0.19 Output 0.00             │
│Target Temp: 65.0C, Zone: 0, Control Device: None         │
└──────────────────────────────────────────────────────────┘

Ctrl-c - Quit   TAB - Tuning
For detailed information on how to interpret the data, how to log thermal
data and how to use tmon to test and tune cooling
devices and sensors, refer to the man page: man 8
tmon. The package tmon is not installed by
default.
The mcelog package logs and
parses/translates Machine Check Exceptions (MCE) on hardware errors
(including memory errors). Formerly, this was done by an hourly cron
job. Now hardware errors are immediately processed by the
mcelog daemon.
However, the mcelog service is not enabled by default, so memory and CPU errors are not logged by default either. In addition, mcelog can also handle predictive bad page offlining and automatic core offlining when cache errors happen.
The service can either be enabled and started via the YaST system services editor or via command line:
root # systemctl enable mcelog
root # systemctl start mcelog
dmidecode shows the machine's DMI table containing
information such as serial numbers and BIOS revisions of the hardware.
root # dmidecode
# dmidecode 2.12
SMBIOS 2.5 present.
27 structures occupying 1298 bytes.
Table at 0x000EB250.
Handle 0x0000, DMI type 4, 35 bytes
Processor Information
Socket Designation: J1PR
Type: Central Processor
Family: Other
Manufacturer: Intel(R) Corporation
ID: E5 06 01 00 FF FB EB BF
Version: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz
Voltage: 1.1 V
External Clock: 133 MHz
Max Speed: 4000 MHz
Current Speed: 2667 MHz
Status: Populated, Enabled
Upgrade: Other
L1 Cache Handle: 0x0004
L2 Cache Handle: 0x0003
L3 Cache Handle: 0x0001
Serial Number: Not Specified
Asset Tag: Not Specified
Part Number: Not Specified
[..]
file #
The command file determines the type of a file or a
list of files by checking /usr/share/misc/magic.
tux > file /usr/bin/file
/usr/bin/file: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), \
for GNU/Linux 2.6.4, dynamically linked (uses shared libs), stripped
The parameter -f LIST
specifies a file with a list of file names to examine. The
-z parameter allows file to look inside
compressed files:
tux > file /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: gzip compressed data, from Unix, max compression
tux > file -z /usr/share/man/man1/file.1.gz
/usr/share/man/man1/file.1.gz: troff or preprocessor input text \
 (gzip compressed data, from Unix, max compression)
The parameter -i outputs a mime type string rather than
the traditional description.
tux > file -i /usr/share/misc/magic
/usr/share/misc/magic: text/plain charset=utf-8
mount, df and du #
The command mount shows which file system (device and
type) is mounted at which mount point:
root # mount
/dev/sda2 on / type ext4 (rw,acl,user_xattr)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
debugfs on /sys/kernel/debug type debugfs (rw)
devtmpfs on /dev type devtmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda3 on /home type ext3 (rw)
securityfs on /sys/kernel/security type securityfs (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
gvfs-fuse-daemon on /home/tux/.gvfs type fuse.gvfs-fuse-daemon \
(rw,nosuid,nodev,user=tux)
Obtain information about total usage of the file systems with the
command df. The parameter -h (or
--human-readable) transforms the output into a form
understandable for common users.
tux > df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 5,9G 13G 32% /
devtmpfs 1,6G 236K 1,6G 1% /dev
tmpfs 1,6G 668K 1,6G 1% /dev/shm
/dev/sda3 208G 40G 159G 20% /home
Display the total size of all the files in a given directory and its
subdirectories with the command du. The parameter
-s suppresses the output of detailed information and
gives only a total for each argument. -h again
transforms the output into a human-readable form:
tux > du -sh /opt
192M /opt
Read the content of binaries with the readelf
utility. This even works with ELF files that were built for other
hardware architectures:
tux > readelf --file-header /bin/ls
ELF Header:
Magic: 7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00
Class: ELF64
Data: 2's complement, little endian
Version: 1 (current)
OS/ABI: UNIX - System V
ABI Version: 0
Type: EXEC (Executable file)
Machine: Advanced Micro Devices X86-64
Version: 0x1
Entry point address: 0x402540
Start of program headers: 64 (bytes into file)
Start of section headers: 95720 (bytes into file)
Flags: 0x0
Size of this header: 64 (bytes)
Size of program headers: 56 (bytes)
Number of program headers: 9
Size of section headers: 64 (bytes)
Number of section headers: 32
Section header string table index: 31
stat #
The command stat displays file properties:
tux > stat /etc/profile
File: `/etc/profile'
Size: 9662 Blocks: 24 IO Block: 4096 regular file
Device: 802h/2050d Inode: 132349 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2009-03-20 07:51:17.000000000 +0100
Modify: 2009-01-08 19:21:14.000000000 +0100
Change: 2009-03-18 12:55:31.000000000 +0100
The parameter --file-system produces details of the
properties of the file system in which the specified file is located:
tux > stat /etc/profile --file-system
File: "/etc/profile"
ID: d4fb76e70b4d1746 Namelen: 255 Type: ext2/ext3
Block size: 4096 Fundamental block size: 4096
Blocks: Total: 2581445 Free: 1717327 Available: 1586197
Inodes: Total: 655776 Free: 490312
fuser #
It can be useful to determine what processes or users are currently
accessing certain files. Suppose, for example, you want to unmount a
file system mounted at /mnt.
umount returns "device is busy." The command
fuser can then be used to determine what processes
are accessing the device:
tux > fuser -v /mnt/*
USER PID ACCESS COMMAND
/mnt/notes.txt tux 26597 f.... less
Following termination of the less process, which was
running on another terminal, the file system can successfully be
unmounted. When used with the -k option,
fuser also terminates the processes accessing the
file.
w #
With the command w, find out who is logged in to the
system and what each user is doing. For example:
tux > w
16:00:59 up 1 day, 2:41, 3 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
tux :0 console Wed13 ?xdm? 8:15 0.03s /usr/lib/gdm/gd
tux console :0 Wed13 26:41m 0.00s 0.03s /usr/lib/gdm/gd
tux pts/0 :0 Wed13 20:11 0.10s 2.89s /usr/lib/gnome-
If any users of other systems have logged in remotely, the parameter
-f shows the computers from which they have established
the connection.
time #
Determine the time spent by commands with the time
utility. This utility is available in two versions: as a Bash built-in
and as a program (/usr/bin/time).
tux > time find . > /dev/null
real    0m4.051s
user    0m0.042s
sys     0m0.205s
real: the real time that elapsed from the command's start-up until it finished.
user: the CPU time of the user as reported by the times system call.
sys: the CPU time of the system as reported by the times system call.
The output of /usr/bin/time is much more detailed.
It is recommended to run it with the -v switch to
produce human-readable output.
tux > /usr/bin/time -v find . > /dev/null
Command being timed: "find ."
User time (seconds): 0.24
System time (seconds): 2.08
Percent of CPU this job got: 25%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:09.03
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 2516
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 1564
Voluntary context switches: 36660
Involuntary context switches: 496
Swaps: 0
File system inputs: 0
File system outputs: 0
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
There is a lot of data in the world around you that can easily be measured over time: for example, changes in temperature, or the amount of data sent and received by your computer's network interface. RRDtool can help you store and visualize such data in detailed and customizable graphs.
RRDtool is available for most Unix platforms and Linux distributions. openSUSE® Leap ships RRDtool as well. Install it either with YaST or by entering
zypper install
rrdtool in the command line as root.
There are Perl, Python, Ruby, and PHP bindings available for RRDtool, so that you can write your own monitoring scripts in your preferred scripting language.
RRDtool is an abbreviation of Round Robin Database tool. Round Robin is a method for working with a constant amount of data. It uses the principle of a circular buffer, where the row of data being read has neither a beginning nor an end. RRDtool uses Round Robin databases to store and read its data.
As mentioned above, RRDtool is designed to work with data that change over time. The ideal case is a sensor that repeatedly reads measured data (like temperature, speed, etc.) at constant intervals, and then exports them in a given format. Such data are perfectly suited for RRDtool, and it is easy to process them and create the desired output.
Sometimes it is not possible to obtain the data automatically and regularly. Their format then needs to be pre-processed before being supplied to RRDtool, and you often even need to run RRDtool manually.
The following is a simple example of basic RRDtool usage. It illustrates all three important phases of the usual RRDtool workflow: creating a database, updating measured values, and viewing the output.
Suppose we want to collect and view information about the memory usage in the Linux system as it changes over time. To make the example more vivid, we measure the currently free memory over a period of 40 seconds at 4-second intervals. Three applications that usually consume a lot of system memory are started and closed: the Firefox Web browser, the Evolution e-mail client, and the Eclipse development framework.
RRDtool is very often used to measure and visualize network traffic. In such cases, the Simple Network Management Protocol (SNMP) is used. This protocol can query network devices for the relevant values of their internal counters; exactly these values are then stored with RRDtool. For more information on SNMP, see http://www.net-snmp.org/.
Our situation is different—we need to obtain the data
manually. A helper script free_mem.sh repetitively
reads the current state of free memory and writes it to the standard
output.
tux > cat free_mem.sh
INTERVAL=4
for steps in {1..10}
do
DATE=`date +%s`
FREEMEM=`free -b | grep "Mem" | awk '{ print $4 }'`
sleep $INTERVAL
echo "rrdtool update free_mem.rrd $DATE:$FREEMEM"
done
The time interval is set to 4 seconds, and is implemented with the
sleep command.
RRDtool accepts time information in a special format, so-called Unix time. It is defined as the number of seconds since midnight of January 1, 1970 (UTC). For example, 1272907114 represents 2010-05-03 17:18:34 UTC.
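The correspondence can be checked with the date command (GNU coreutils); a quick sketch:

```shell
# Print the current time as Unix time
date +%s
# Convert a Unix time stamp back into a readable date (in UTC)
date -u -d @1272907114
```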
The free memory information is reported in bytes with
free -b. Prefer supplying basic
units (bytes) to multiples of units (like kilobytes).
The line with the echo ... command contains the
future name of the database file (free_mem.rrd),
and together with the time stamp and measured value forms a complete
command line for updating RRDtool values.
After running free_mem.sh, you see an output similar
to this:
tux > sh free_mem.sh
rrdtool update free_mem.rrd 1272974835:1182994432
rrdtool update free_mem.rrd 1272974839:1162817536
rrdtool update free_mem.rrd 1272974843:1096269824
rrdtool update free_mem.rrd 1272974847:1034219520
rrdtool update free_mem.rrd 1272974851:909438976
rrdtool update free_mem.rrd 1272974855:832454656
rrdtool update free_mem.rrd 1272974859:829120512
rrdtool update free_mem.rrd 1272974863:1180377088
rrdtool update free_mem.rrd 1272974867:1179369472
rrdtool update free_mem.rrd 1272974871:1181806592
It is convenient to redirect the command's output to a file with
sh free_mem.sh > free_mem_updates.log
to simplify its future execution.
Create the initial Round Robin database for our example with the following command:
tux > rrdtool create free_mem.rrd --start 1272974834 --step=4 \
DS:memory:GAUGE:600:U:U RRA:AVERAGE:0.5:1:24
This command creates a file called free_mem.rrd
for storing our measured values in a Round Robin type database.
The --start option specifies the time (in Unix time)
when the first value will be added to the database. In this example,
it is one less than the first time value of the
free_mem.sh output (1272974835).
The --step specifies the time interval in seconds
with which the measured data will be supplied to the database.
The DS:memory:GAUGE:600:U:U part introduces a new
data source for the database. It is called
memory, its type is gauge,
the maximum number of seconds between two updates is 600, and the
minimum and maximum values
in the measured range are unknown (U).
RRA:AVERAGE:0.5:1:24 creates a Round Robin archive
(RRA) whose stored data are processed with a
consolidation function (CF) that calculates the
average of data points. The three arguments of the
consolidation function are appended to the end of the line.
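How much history such an archive holds follows from the create arguments: each of the 24 stored rows averages 1 primary data point taken every 4 seconds (--step=4), so the archive covers 96 seconds, enough for our 40-second measurement. The arithmetic as a sketch:

```shell
# rows (24) * primary data points per row (1) * step (4 s)
# = seconds of history kept by the archive
echo $(( 24 * 1 * 4 ))
```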
If no error message is displayed, the
free_mem.rrd database is created in the current
directory:
tux > ls -l free_mem.rrd
-rw-r--r-- 1 tux users 776 May 5 12:50 free_mem.rrd
After the database is created, you need to fill it with the measured
data. In Section 2.11.2.1, “Collecting Data”, we already
prepared the file free_mem_updates.log, which
consists of rrdtool update commands. These commands
update the database values for us.
tux > sh free_mem_updates.log; ls -l free_mem.rrd
-rw-r--r-- 1 tux users 776 May 5 13:29 free_mem.rrd
As you can see, the size of free_mem.rrd remained
the same even after updating its data.
We have already measured the values, created the database, and stored the measured value in it. Now we can play with the database, and retrieve or view its values.
To retrieve all the values from our database, enter the following on the command line:
tux > rrdtool fetch free_mem.rrd AVERAGE --start 1272974830 \
--end 1272974871
memory
1272974832: nan
1272974836: 1.1729059840e+09
1272974840: 1.1461806080e+09
1272974844: 1.0807572480e+09
1272974848: 1.0030243840e+09
1272974852: 8.9019289600e+08
1272974856: 8.3162112000e+08
1272974860: 9.1693465600e+08
1272974864: 1.1801251840e+09
1272974868: 1.1799787520e+09
1272974872: nan
AVERAGE will fetch average value points from the
database, because only one data source was defined
(Section 2.11.2.2, “Creating the Database”) with
AVERAGE processing, and no other function is
available.
The first line of the output prints the name of the data source as defined in Section 2.11.2.2, “Creating the Database”.
The left results column represents individual points in time, while the right one represents corresponding measured average values in scientific notation.
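For a quick human-readable check, the scientific-notation byte values can be converted with a short awk filter; a sketch using one line of the fetch output above:

```shell
# Convert the measured byte value to mebibytes
echo "1272974836: 1.1729059840e+09" | \
  awk -F': ' '{ printf "%s: %.0f MiB\n", $1, $2 / (1024 * 1024) }'
```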
The nan in the last line stands for “not a
number”.
Now a graph representing the values stored in the database is drawn:
tux > rrdtool graph free_mem.png \
--start 1272974830 \
--end 1272974871 \
--step=4 \
DEF:free_memory=free_mem.rrd:memory:AVERAGE \
LINE2:free_memory#FF0000 \
--vertical-label "GB" \
--title "Free System Memory in Time" \
--zoom 1.5 \
--x-grid SECOND:1:SECOND:4:SECOND:10:0:%X
free_mem.png is the file name of the graph to be
created.
--start and --end limit the time
range within which the graph will be drawn.
--step specifies the time resolution (in seconds) of
the graph.
The DEF:... part is a data definition called
free_memory. Its data are read from the
free_mem.rrd database and its data source called
memory. The average value
points are calculated, because no others were defined in
Section 2.11.2.2, “Creating the Database”.
The LINE... part specifies properties of the line
to be drawn into the graph. It is 2 pixels wide, its data come from
the free_memory definition, and its color is
red.
--vertical-label sets the label to be printed along
the y axis, and --title sets
the main label for the whole graph.
--zoom specifies the zoom factor for the graph. This
value must be greater than zero.
--x-grid specifies how to draw grid lines and their
labels into the graph. Our example places them every second, while
major grid lines are placed every 4 seconds. Labels are placed every
10 seconds under the major grid lines.
RRDtool is a very complex tool with a lot of sub-commands and command line options. Some are easy to understand, but to make it produce the results you want and fine-tune them according to your liking may require a lot of effort.
Apart from RRDtool's man page (man 1 rrdtool) which
gives you only basic information, you should have a look at the
RRDtool home
page. There is a detailed
documentation
of the rrdtool command and all its sub-commands.
There are also several
tutorials
to help you understand the common RRDtool workflow.
If you are interested in monitoring network traffic, have a look at MRTG (Multi Router Traffic Grapher). MRTG can graph the activity of many network devices. It can use RRDtool.
System log file analysis is one of the most important tasks when analyzing
the system. In fact, looking at the system log files should be the first
thing to do when maintaining or troubleshooting a system. openSUSE Leap
automatically logs almost everything that happens on the system in detail.
Since the move to systemd, kernel messages and messages of system
services registered with systemd are logged in the systemd journal
(see Chapter 11, journalctl: Query the systemd Journal). Other log files (mainly those of
system applications) are written in plain text and can be easily read
using an editor or pager. It is also possible to parse them using scripts.
This allows you to filter their content.
/var/log/ #
System log files are always located under the
/var/log directory. The following list presents an
overview of all system log files from openSUSE Leap present after a
default installation. Depending on your installation scope,
/var/log also contains log files from other services
and applications not listed here. Some files and directories described
below are “placeholders” and are only used when the
corresponding application is installed. Most log files are only visible
to the user root.
apparmor/
AppArmor log files. For more information about AppArmor, see Part IV, “Confining Privileges with AppArmor”.
audit/
Logs from the audit framework. See Part VI, “The Linux Audit Framework” for details.
ConsoleKit/
Logs of the ConsoleKit daemon
(the daemon for tracking which users are logged in and how they interact
with the computer).
cups/
Access and error logs of the Common Unix Printing System
(cups).
firewall
Firewall logs.
gdm/
Log files from the GNOME display manager.
krb5/
Log files from the Kerberos network authentication system.
lastlog
A database containing information on the last login of each user. Use
the command lastlog to view. See man 8
lastlog for more information.
localmessages
Log messages of some boot scripts, for example the log of the DHCP client.
mail*
Mail server (postfix,
sendmail) logs.
messages
This is the default place where all kernel and system log messages go
and should be the first place (along with
/var/log/warn) to look at in case of problems.
NetworkManager
NetworkManager log files.
news/
Log messages from a news server.
chrony/
Logs from the Network Time Protocol daemon
(chrony).
pk_backend_zypp*
PackageKit (with libzypp
back-end) log files.
samba/
Log files from Samba, the Windows SMB/CIFS file server.
warn
Log of all system warnings and errors. This should be the first place
(along with the output of the systemd journal) to look in case of
problems.
wtmp
Database of all login/logout activities,
and remote connections. Use the command last to
view. See man 1 last for more information.
Xorg.0.log
X.Org start-up log file. Refer to this in case you have problems starting X.Org. Copies from previous X.Org starts are numbered Xorg.?.log.
YaST2/
All YaST log files.
zypp/
libzypp log files. Refer to
these files for the package installation history.
zypper.log
Logs from the command line installer zypper.
To view log files, you can use any text editor. There is also a simple YaST module for viewing the system log available in the YaST control center under › .
For viewing log files in a text console, use the commands
less or more. Use
head and tail to view the beginning
or end of a log file. To view entries appended to a log file in real-time
use tail -f. For information about
how to use these tools, see their man pages.
To search for strings or regular expressions in log files use
grep. awk is useful for parsing and
rewriting log files.
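As a minimal sketch of these tools in action, on a small stand-in file rather than a real log:

```shell
# Create a small stand-in log file
printf 'boot ok\ndisk warning\nnet ok\ndisk error\n' > /tmp/demo.log
head -n 2 /tmp/demo.log        # first two entries
tail -n 1 /tmp/demo.log        # most recent entry
grep disk /tmp/demo.log        # all lines mentioning "disk"
```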
logrotate #
Log files under /var/log grow on a daily basis and
quickly become very large. logrotate is a tool that
helps you manage log files and their growth. It allows automatic
rotation, removal, compression, and mailing of log files. Log files can
be handled periodically (daily, weekly, or monthly) or when exceeding a
particular size.
logrotate is usually run daily by systemd,
and thus usually modifies log files only once a day. However, exceptions
occur when a log file is modified because of its size, if
logrotate is run multiple times a day, or if
--force is enabled. Use
/var/lib/misc/logrotate.status to find out when a
particular file was last rotated.
The main configuration file of logrotate is
/etc/logrotate.conf. System packages and
programs that produce log files (for example,
apache2) put their own
configuration files in the /etc/logrotate.d/
directory. The content of /etc/logrotate.d/ is
included via /etc/logrotate.conf.
/etc/logrotate.conf #
# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# use date as a suffix of the rotated file
dateext

# uncomment this if you want your log files compressed
#compress

# comment these to switch compression to use gzip or another
# compression scheme
compresscmd /usr/bin/bzip2
uncompresscmd /usr/bin/bunzip2

# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
The create option pays heed to the modes and
ownerships of files specified in /etc/permissions*.
If you modify these settings, make sure no conflicts arise.
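A per-service drop-in file in /etc/logrotate.d/ typically looks like the following sketch (the service name and all values are illustrative examples, not shipped defaults):

```
# /etc/logrotate.d/myapp (hypothetical example)
/var/log/myapp.log {
    size 10M          # rotate when the log exceeds 10 MB
    rotate 5          # keep five rotated copies
    compress          # compress rotated logs
    missingok         # do not complain if the log file is absent
    notifempty        # skip rotation when the log is empty
}
```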
logwatch #
logwatch is a customizable, pluggable log-monitoring
script. It parses system logs, extracts the important information and
presents it in a human-readable manner. To use
logwatch, install the
logwatch package.
logwatch can either be used at the command line to
generate on-the-fly reports, or via cron to regularly create custom
reports. Reports can either be printed on the screen, saved to a file, or
be mailed to a specified address. The latter is especially useful when
automatically generating reports via cron.
On the command line, you can tell logwatch for which
service and time span to generate a report and how much detail should be
included:
# Detailed report on all kernel messages from yesterday
logwatch --service kernel --detail High --range Yesterday --print

# Low detail report on all sshd events recorded (incl. archived logs)
logwatch --service sshd --detail Low --range All --archives --print

# Mail a report on all smartd messages from May 5th to May 7th to root@localhost
logwatch --service smartd --range 'between 5/5/2005 and 5/7/2005' \
  --mailto root@localhost --print
The --range option has a complex syntax—see
logwatch --range help for details. A
list of all services that can be queried is available with the following
command:
tux > ls /usr/share/logwatch/default.conf/services/ | sed 's/\.conf//g'
logwatch can be customized to great detail. However,
the default configuration should usually be sufficient. The default
configuration files are located under
/usr/share/logwatch/default.conf/. Never change them,
because they would be overwritten by the next update. Rather
place custom configuration in /etc/logwatch/conf/
(you may use the default configuration file as a template, though). A
detailed HOWTO on customizing logwatch is available at
/usr/share/doc/packages/logwatch/HOWTO-Customize-LogWatch.
The following configuration files exist:
logwatch.conf
The main configuration file. The default version is extensively commented. Each configuration option can be overwritten on the command line.
ignore.conf
Filter for all lines that should globally be ignored by
logwatch.
services/*.conf
The service directory holds configuration files for each service you can generate a report for.
logfiles/*.conf
Specifications on which log files should be parsed for each service.
logger to Make System Log Entries #
logger is a tool for making entries in the system log.
It provides a shell command interface to the rsyslogd system log module.
For example, the following line outputs its message in
/var/log/messages or directly in the journal (if no
logging facility is running):
tux > logger -t Test "This message comes from $USER"
Depending on the current user and host name, the log contains a line similar to this:
Sep 28 13:09:31 venus Test: This message comes from tux
SystemTap provides a command line interface and a scripting language to examine the activities of a running Linux system, particularly the kernel, in fine detail. SystemTap scripts are written in the SystemTap scripting language, are then compiled to C-code kernel modules and inserted into the kernel…
Kernel probes are a set of tools to collect Linux kernel debugging and performance information. Developers and system administrators usually use them either to debug the kernel, or to find system performance bottlenecks. The reported data can then be used to tune the system for better performance.
Perf is an interface to access the performance monitoring unit (PMU) of a processor and to record and display software events such as page faults. It supports system-wide, per-thread, and KVM virtualization guest monitoring.
OProfile is a profiler for dynamic program analysis. It investigates the behavior of a running program and gathers information. This information can be viewed and gives hints for further optimization.
It is not necessary to recompile or use wrapper libraries to use OProfile. Not even a kernel patch is needed. Usually, when profiling an application, a small overhead is expected, depending on the workload and sampling frequency.
SystemTap provides a command line interface and a scripting language to
examine the activities of a running Linux system, particularly the kernel,
in fine detail. SystemTap scripts are written in the SystemTap scripting
language, are then compiled to C-code kernel modules and inserted into the
kernel. The scripts can be designed to extract, filter and summarize data,
thus allowing the diagnosis of complex performance problems or functional
problems. SystemTap provides information similar to the output of tools
like netstat, ps,
top, and iostat. However, more
filtering and analysis options can be used for the collected information.
Each time you run a SystemTap script, a SystemTap session is started.
Several passes are done on the script before it is allowed to run.
Then, the script is compiled into a kernel module and loaded. If the
script has been executed before and no system components have changed
(for example, different compiler or kernel versions, library paths, or
script contents), SystemTap does not compile the script again. Instead,
it uses the *.c and *.ko data
stored in the SystemTap cache (~/.systemtap).
The module is unloaded when the tap has finished running. For an example, see the test run in Section 4.2, “Installation and Setup” and the respective explanation.
SystemTap usage is based on SystemTap scripts
(*.stp). They tell SystemTap which type of
information to collect, and what to do once that information is
collected. The scripts are written in the SystemTap scripting language
that is similar to AWK and C. For the language definition, see
http://sourceware.org/systemtap/langref/. A lot of
useful example scripts are available from
http://www.sourceware.org/systemtap/examples/.
The essential idea behind a SystemTap script is to name
events, and to give them handlers.
When SystemTap runs the script, it monitors for certain events. When an
event occurs, the Linux kernel runs the handler as a sub-routine, then
resumes. Thus, events serve as the triggers for handlers to run.
Handlers can record specified data and print it in a certain manner.
The SystemTap language only uses a few data types (integers, strings, and associative arrays of these), and full control structures (blocks, conditionals, loops, functions). It has a lightweight punctuation (semicolons are optional) and does not need detailed declarations (types are inferred and checked automatically).
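As an illustrative sketch (not from the official example collection; the probe points and variable names are assumptions), the following script combines these elements: an associative array of integers keyed by strings, a loop, and automatically inferred types.

```systemtap
# Count VFS reads per process name in an associative array.
global reads

probe vfs.read {
  reads[execname()]++      # value type (integer) is inferred automatically
}

# After ten seconds, print the collected counters and quit.
probe timer.s(10) {
  foreach (name in reads)
    printf("%16s %8d\n", name, reads[name])
  exit()
}
```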
For more information about SystemTap scripts and their syntax, refer to
Section 4.3, “Script Syntax” and to the
stapprobes and stapfuncs man
pages, that are available with the
systemtap-docs package.
Tapsets are a library of pre-written probes and functions that can be
used in SystemTap scripts. When a user runs a SystemTap script,
SystemTap checks the script's probe events and handlers against the
tapset library. SystemTap then loads the corresponding probes and
functions before translating the script to C. Like SystemTap scripts
themselves, tapsets use the file name extension
*.stp.
However, unlike SystemTap scripts, tapsets are not meant for direct execution. They constitute the library from which other scripts can pull definitions. Thus, the tapset library is an abstraction layer designed to make it easier for users to define events and functions. Tapsets provide aliases for functions that users could want to specify as an event. Knowing the proper alias is often easier than remembering specific kernel functions that might vary between kernel versions.
The main commands associated with SystemTap are stap
and staprun. To execute them, you either need
root privileges or must be a member of the
stapdev or
stapusr group.
stap
SystemTap front-end. Runs a SystemTap script (either from file, or from standard input). It translates the script into C code, compiles it, and loads the resulting kernel module into a running Linux kernel. Then, the requested system trace or probe functions are performed.
staprun
SystemTap back-end. Loads and unloads kernel modules produced by the SystemTap front-end.
For a list of options for each command, use --help. For
details, refer to the stap and the
staprun man pages.
To avoid giving root access to users solely to enable them to work
with SystemTap, use one of the following SystemTap groups. They are not available
by default on openSUSE Leap, but you can create the groups and modify the
access rights accordingly. Also adjust the permissions of the
staprun command if the security implications are
appropriate for your environment.
stapdev
Members of this group can run SystemTap scripts with
stap, or run SystemTap instrumentation modules
with staprun. As running stap
involves compiling scripts into kernel modules and loading them into
the kernel, members of this group still have effective root
access.
stapusr
Members of this group are only allowed to run SystemTap
instrumentation modules with staprun. In addition,
they can only run those modules from
/lib/modules/KERNEL_VERSION/systemtap/.
This directory must be owned by root and must only be
writable for the root user.
The following list gives an overview of the SystemTap main files and directories.
/lib/modules/KERNEL_VERSION/systemtap/
Holds the SystemTap instrumentation modules.
/usr/share/systemtap/tapset/
Holds the standard library of tapsets.
/usr/share/doc/packages/systemtap/examples
Holds several example SystemTap scripts for various purposes.
Only available if the
systemtap-docs package is
installed.
~/.systemtap/cache
Data directory for cached SystemTap files.
/tmp/stap*
Temporary directory for SystemTap files, including translated C code and kernel object.
As SystemTap needs information about the kernel, some additional
kernel-related packages must be installed. For each kernel you want to
probe with SystemTap, you need to install a set of the following
packages. This set should exactly match the kernel version and flavor
(indicated by * in the overview below).
If you subscribed your system for online updates, you can find
“debuginfo” packages in the
*-Debuginfo-Updates online installation repository
relevant for openSUSE Leap 42.3. Use YaST to
enable the repository.
For the classic SystemTap setup, install the following packages (using
either YaST or zypper).
systemtap
systemtap-server
systemtap-docs (optional)
kernel-*-base
kernel-*-debuginfo
kernel-*-devel
kernel-source-*
gcc
To get access to the man pages and to a helpful collection of example
SystemTap scripts for various purposes, additionally install the
systemtap-docs package.
To check if all packages are correctly installed on the machine and if
SystemTap is ready to use, execute the following command as
root.
root # stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'
This command probes the currently used kernel by running a script and returning an output. If the output is similar to the following, SystemTap is successfully deployed and ready to use:
Pass 1: parsed user script and 59 library script(s) in 80usr/0sys/214real ms.
Pass 2: analyzed script: 1 probe(s), 11 function(s), 2 embed(s), 1 global(s) in 140usr/20sys/412real ms.
Pass 3: translated to C into "/tmp/stapDwEk76/stap_1856e21ea1c246da85ad8c66b4338349_4970.c" in 160usr/0sys/408real ms.
Pass 4: compiled C into "stap_1856e21ea1c246da85ad8c66b4338349_4970.ko" in 2030usr/360sys/10182real ms.
Pass 5: starting run.
read performed
Pass 5: run completed in 10usr/20sys/257real ms.
During pass 1, SystemTap checks the script against the existing tapset library in /usr/share/systemtap/tapset/ for any tapsets used. During pass 2, it examines the script for its components. During passes 3 and 4, the script is translated to C and the system C compiler creates a kernel module from it. Both the resulting C code (*.c) and the kernel module (*.ko) are stored in the SystemTap cache (~/.systemtap). During pass 5, the module is loaded and all the probes (events and handlers) in the script are enabled by hooking into the kernel. The event being probed is a Virtual File System (VFS) read. As the event occurs on any processor, a valid handler is executed (it prints the text read performed) and the session exits. After the SystemTap session is terminated, the probes are disabled, and the kernel module is unloaded.
In case any error messages appear during the test, check the output for hints about any missing packages and make sure they are installed correctly. Rebooting and loading the appropriate kernel may also be needed.
SystemTap scripts consist of the following two components:
Events: Name the kernel events at which the associated handler should be executed. Examples of events are entering or exiting a certain function, a timer expiring, or starting or terminating a session.
Handlers: Series of script language statements that specify the work to be done whenever a certain event occurs. This normally includes extracting data from the event context, storing it in internal variables, or printing results.
An event and its corresponding handler is collectively called a
probe. SystemTap events are also called probe
points. A probe's handler is also called a probe
body.
Comments can be inserted anywhere in the SystemTap script in various
styles: using either #, /* */, or
// as marker.
A SystemTap script can have multiple probes. They must be written in the following format:
probe EVENT {STATEMENTS}
Each probe has a corresponding statement block. This statement block
must be enclosed in { } and contains the statements
to be executed per event.
The following example shows a simple SystemTap script.
probe begin
{
  printf ("hello world\n")
  exit ()
}
The probe keyword starts the probe, and the event begin (the start of the SystemTap session) triggers the handler enclosed in { }. The handler defines two functions: printf, which prints hello world followed by a new line, and exit (), which terminates the SystemTap session.
If your statement block holds several statements, SystemTap executes these statements in sequence—you do not need to insert special separators or terminators between multiple statements. A statement block can also be nested within another statement block. Generally, statement blocks in SystemTap scripts use the same syntax and semantics as in the C programming language.
SystemTap supports several built-in events.
The general event syntax is a dotted-symbol sequence. This allows a
breakdown of the event namespace into parts. Each component identifier
may be parameterized by a string or number literal, with a syntax like a
function call. A component may include a * character,
to expand to other matching probe points. A probe point may be followed
by a ? character, to indicate that it is optional,
and that no error should result if it fails to expand.
Alternately, a probe point may be followed by a !
character to indicate that it is both optional and sufficient.
SystemTap supports multiple events per probe—they need to be
separated by a comma (,). If multiple events are
specified in a single probe, SystemTap will execute the handler when any
of the specified events occur.
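For example, the following sketch attaches one handler to two events (the syscall.open and syscall.close probe points, and the probefunc() function, are standard tapset names used here as assumptions):

```systemtap
# One handler, two events: fires on entry to either system call.
probe syscall.open, syscall.close {
  printf("%s(%d) called %s\n", execname(), pid(), probefunc())
}
```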
In general, events can be classified into the following categories:
Synchronous events: Occur when any process executes an instruction at a particular location in kernel code. This gives other events a reference point (instruction address) from which more contextual data may be available.
An example of a synchronous event is
vfs.FILE_OPERATION: The
entry to the FILE_OPERATION event for
Virtual File System (VFS). For example, in
Section 4.2, “Installation and Setup”, read
is the FILE_OPERATION event used for VFS.
Asynchronous events: Not tied to a particular instruction or location in code. This family of probe points consists mainly of counters, timers, and similar constructs.
Examples of asynchronous events are: begin (the start of a SystemTap session—when a SystemTap script is run), end (the end of a SystemTap session), or timer events. Timer events specify a handler to be executed periodically, for example, timer.s(SECONDS) or timer.ms(MILLISECONDS).
When used together with other probes that collect information, timer events allow you to print periodic updates and see how that information changes over time.
For example, the following probe would print the text “hello world” every 4 seconds:
probe timer.s(4)
{
printf("hello world\n")
}
For detailed information about supported events, refer to the
stapprobes man page. The See
Also section of the man page also contains links to other
man pages that discuss supported events for specific subsystems and
components.
Each SystemTap event is accompanied by a corresponding handler defined for that event, consisting of a statement block.
If you need the same set of statements in multiple probes, you can
place them in a function for easy reuse. Functions are defined by the
keyword function followed by a name. They take any
number of string or numeric arguments (by value) and may return a
single string or number.
function FUNCTION_NAME(ARGUMENTS) {STATEMENTS}
probe EVENT {FUNCTION_NAME(ARGUMENTS)}
The statements in FUNCTION_NAME are executed when the probe for EVENT executes. The ARGUMENTS are optional values passed into the function.
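A minimal sketch of such a reusable function (the names report and msg are chosen for illustration) might look like this:

```systemtap
# A reusable helper that prefixes output with process name and PID.
function report(msg) {
  printf("%s(%d): %s\n", execname(), pid(), msg)
}

probe begin { report("session started") }
probe end   { report("session finished") }
```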
Functions can be defined anywhere in the script. One frequently needed function was already introduced in
Example 4.1, “Simple SystemTap Script”: the printf
function for printing data in a formatted way. When using the
printf function, you can specify how arguments
should be printed by using a format string. The format string is
included in quotation marks and can contain further format specifiers,
introduced by a % character.
Which format strings to use depends on your list of arguments. Format strings can have multiple format specifiers—each matching a corresponding argument. Multiple arguments can be separated by a comma.
Example 4.3: printf Function with Format Specifiers
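A probe matching the following description could look like this (a sketch; the syscall.open probe point is an assumption):

```systemtap
probe syscall.open {
  printf("%s(%d) open\n", execname(), pid())
}
```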
The example above prints the current executable name
(execname()) as a string and the process ID
(pid()) as an integer in brackets. Then, a space,
the word open and a line break follow:
[...] vmware-guestd(2206) open hald(2360) open [...]
Apart from the two functions execname() and pid() used in
Example 4.3, “printf Function with Format Specifiers”, a variety of other
functions can be used as printf arguments.
Among the most commonly used SystemTap functions are the following:
tid(): ID of the current thread.
pid(): Process ID of the current thread.
uid(): ID of the current user.
cpu(): Current CPU number.
execname(): Name of the current process.
gettimeofday_s(): Number of seconds since the Unix epoch (January 1, 1970).
ctime(): Convert time into a string.
pp(): String describing the probe point currently being handled.
thread_indent(): Useful function for organizing print results. It (internally) stores
an indentation counter for each thread (tid()).
The function takes one argument, an indentation delta, indicating
how many spaces to add or remove from the thread's indentation
counter. It returns a string with some generic trace data along with
an appropriate number of indentation spaces. The generic data
returned includes a time stamp (number of microseconds since the
initial indentation for the thread), a process name, and the thread
ID itself. This allows you to identify what functions were called,
who called them, and how long they took.
Call entries and exits often do not immediately precede each other
(otherwise it would be easy to match them). In between a first call
entry and its exit, usually other call entries and exits
are made. The indentation counter helps you match an entry with its
corresponding exit as it indents the next function call in case it
is not the exit of the previous one. For an
example SystemTap script using thread_indent()
and the respective output, refer to the SystemTap
Tutorial:
http://sourceware.org/systemtap/tutorial/Tracing.html#fig:socket-trace.
For more information about supported SystemTap functions, refer to the
stapfuncs man page.
Apart from functions, you can use other common constructs in SystemTap handlers, including variables, conditional statements (like if/else), while loops, for loops, arrays, and command line arguments.
Variables may be defined anywhere in the script. To define one, simply choose a name and assign a value from a function or expression to it:
foo = gettimeofday_s()
Then you can use the variable in an expression. From the type of
values assigned to the variable, SystemTap automatically infers the
type of each identifier (string or number). Any inconsistencies will
be reported as errors. In the example above, foo
would automatically be classified as a number and could be printed via
printf() with the integer format specifier
(%d).
However, by default, variables are local to the probe they are used in: They are initialized, used, and disposed of at each handler invocation. To share variables between probes, declare them global anywhere in the script. To do so, use the global keyword outside of the probes:
global count_jiffies, count_ms
probe timer.jiffies(100) { count_jiffies ++ }
probe timer.ms(100) { count_ms ++ }
probe timer.ms(12345)
{
hz=(1000*count_jiffies) / count_ms
printf ("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n",
count_jiffies, count_ms, hz)
exit ()
}
This example script computes the CONFIG_HZ setting of the kernel by
using timers that count jiffies and milliseconds, then computing
accordingly. (A jiffy is the duration of one tick of the system timer
interrupt. It is not an absolute time interval unit, since its
duration depends on the clock interrupt frequency of the particular
hardware platform). With the global statement it
is possible to use the variables count_jiffies and
count_ms also in the probe
timer.ms(12345). With ++ the
value of a variable is incremented by 1.
There are several conditional statements that you can use in SystemTap scripts. The following are probably the most common:
If statements are expressed in the following format:
if (CONDITION) STATEMENT1
else STATEMENT2
The if statement compares an integer-valued expression to zero. If the condition expression is non-zero, STATEMENT1 is executed. If the condition expression is zero, STATEMENT2 is executed. The else clause (with STATEMENT2) is optional. Both STATEMENT1 and STATEMENT2 can also be statement blocks.
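A sketch of an if/else inside a handler (the probe points and counter names are illustrative):

```systemtap
global countread, countnonread

probe kernel.function("vfs_read"), kernel.function("vfs_write") {
  if (probefunc() == "vfs_read")
    countread++        # condition non-zero: first statement runs
  else
    countnonread++     # condition zero: second statement runs
}

probe timer.s(5) {
  printf("reads: %d, writes: %d\n", countread, countnonread)
  exit()
}
```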
While loops are expressed in the following format:
while (CONDITION) STATEMENT
As long as the condition is non-zero, the statement is executed. The statement can also be a statement block. It must change a value so that the condition will eventually be zero.
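For instance, this sketch (names illustrative) prints three lines and then leaves the loop because the condition becomes zero:

```systemtap
probe begin {
  i = 0
  while (i < 3) {
    printf("iteration %d\n", i)
    i++          # changes the value so the condition eventually becomes false
  }
  exit()
}
```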
For loops are a shortcut for while loops and are expressed in the following format:
for (INITIALIZATION; CONDITIONAL; INCREMENT) STATEMENT
The INITIALIZATION expression is used to initialize a counter for the number of loop iterations and is executed before execution of the loop starts. The execution of the loop continues until the loop CONDITIONAL is false (this expression is checked at the beginning of each loop iteration). The INCREMENT expression is used to increment the loop counter and is executed at the end of each loop iteration.
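A sketch of a for loop that prints a fixed number of lines (names illustrative):

```systemtap
probe begin {
  for (i = 0; i < 3; i++)    # initialization; conditional; increment
    printf("iteration %d\n", i)
  exit()
}
```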
The following operators can be used in conditional statements:
==: Is equal to
!=: Is not equal to
>=: Is greater than or equal to
<=: Is less than or equal to
If you have installed the
systemtap-docs package, you can
find several useful SystemTap example scripts in
/usr/share/doc/packages/systemtap/examples.
This section describes a rather simple example script in more detail:
/usr/share/doc/packages/systemtap/examples/network/tcp_connections.stp.
tcp_connections.stp
#! /usr/bin/env stap
probe begin {
printf("%6s %16s %6s %6s %16s\n",
"UID", "CMD", "PID", "PORT", "IP_SOURCE")
}
probe kernel.function("tcp_accept").return?,
kernel.function("inet_csk_accept").return? {
sock = $return
if (sock != 0)
printf("%6d %16s %6d %6d %16s\n", uid(), execname(), pid(),
inet_get_local_port(sock), inet_get_ip_source(sock))
}
This SystemTap script monitors incoming TCP connections and helps to identify unauthorized or unwanted network access requests in real time. It shows the following information for each new incoming TCP connection accepted by the computer:
User ID (UID)
Command accepting the connection (CMD)
Process ID of the command (PID)
Port used by the connection (PORT)
IP address from which the TCP connection originated (IP_SOURCE)
To run the script, execute
stap /usr/share/doc/packages/systemtap/examples/network/tcp_connections.stp
and follow the output on the screen. To manually stop the script, press Ctrl–C.
For debugging user space applications (like DTrace can do), openSUSE Leap 42.3 supports user space probing with SystemTap: Custom probe points can be inserted in any user space application. Thus, SystemTap lets you use both kernel space and user space probes to debug the behavior of the whole system.
To get the required utrace infrastructure and the uprobes kernel module
for user space probing, you need to install the
kernel-trace package in
addition to the packages listed in
Section 4.2, “Installation and Setup”.
utrace implements a framework for controlling
user space tasks. It provides an interface that can be used by various
tracing “engines”, implemented as loadable kernel modules.
The engines register callback functions for specific events, then attach
to whichever thread they want to trace. As the callbacks are made from
“safe” places in the kernel, this allows for great leeway in
the kinds of processing the functions can do. Various events can be
watched via utrace, for example, system call entry and exit, fork(),
signals being sent to the task, etc. More details about the utrace
infrastructure are available at
http://sourceware.org/systemtap/wiki/utrace.
SystemTap includes support for probing the entry into and return from a function in user space processes, probing predefined markers in user space code, and monitoring user-process events.
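A sketch of such a user space probe (the binary path and function name are illustrative; the target needs debug information):

```systemtap
# Fires on entry to main() of /bin/ls; requires uprobes/utrace support.
probe process("/bin/ls").function("main") {
  printf("%s(%d) entered main()\n", execname(), pid())
}
```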
To check if the currently running kernel provides the needed utrace support, use the following command:
tux > sudo grep CONFIG_UTRACE /boot/config-`uname -r`
For more details about user space probing, refer to https://sourceware.org/systemtap/SystemTap_Beginners_Guide/userspace-probing.html.
This chapter only provides a short SystemTap overview. Refer to the following links for more information about SystemTap:
SystemTap project home page.
Huge collection of useful information about SystemTap, ranging from detailed user and developer documentation to reviews and comparisons with other tools, or Frequently Asked Questions and tips. Also contains collections of SystemTap scripts, examples and usage stories and lists recent talks and papers about SystemTap.
Features a SystemTap Tutorial, a SystemTap Beginner's Guide, a Tapset Developer's Guide, and a SystemTap Language Reference in PDF and HTML format. Also lists the relevant man pages.
You can also find the SystemTap language reference and SystemTap tutorial
in your installed system under
/usr/share/doc/packages/systemtap. Example SystemTap
scripts are available from the example subdirectory.
Kernel probes are a set of tools to collect Linux kernel debugging and performance information. Developers and system administrators usually use them either to debug the kernel, or to find system performance bottlenecks. The reported data can then be used to tune the system for better performance.
You can insert these probes into any kernel routine, and specify a handler to be invoked after a particular break-point is hit. The main advantage of kernel probes is that you no longer need to rebuild the kernel and reboot the system after you make changes in a probe.
To use kernel probes, you typically need to write or obtain a specific
kernel module. Such modules include both the init and
the exit function. The init function (such as
register_kprobe()) registers one or more probes,
while the exit function unregisters them. The registration function
defines where the probe will be inserted and
which handler will be called after the probe is hit.
To register or unregister a group of probes at one time, you can use
relevant
register_<PROBE_TYPE>probes()
or
unregister_<PROBE_TYPE>probes()
functions.
Debugging and status messages are typically reported with the
printk kernel routine.
printk is a kernel space equivalent of a
user space printf routine. For more information
on printk, see
Logging
kernel messages. Normally, you can view these messages by
inspecting the output of the systemd journal (see
Chapter 11, journalctl: Query the systemd Journal). For more information on log files, see
Chapter 3, Analyzing and Managing System Log Files.
Kernel probes are fully implemented on the following architectures:
x86
AMD64/Intel 64
ARM
POWER
Kernel probes are partially implemented on the following architectures:
IA64 (does not support probes on instruction
slot1)
sparc64 (return probes not yet implemented)
There are three types of kernel probes: Kprobes, Jprobes, and Kretprobes. Kretprobes are sometimes called return probes. You can find source code examples of all three types of probes in the Linux kernel. See the directory /usr/src/linux/samples/kprobes/ (package kernel-source).
Kprobes can be attached to any instruction in the Linux kernel. When a kprobe is registered, it inserts a break-point at the first byte of the probed instruction. When the processor hits this break-point, the processor registers are saved, and the processing passes to Kprobes. First, a pre-handler is executed, then the probed instruction is stepped, and, finally, a post-handler is executed. Control is then passed to the instruction following the probe point.
Jprobes is implemented through the Kprobes mechanism. It is
inserted on a function's entry point and allows direct access to the
arguments of the function which is being probed. Its handler routine
must have the same argument list and return value as the probed
function. To end it, call the jprobe_return()
function.
When a jprobe is hit, the processor registers are saved, and the instruction pointer is directed to the jprobe handler routine. Control then passes to the handler with the same register contents as the function being probed. Finally, the handler calls the jprobe_return() function, which switches control back to the probed function.
In general, you can insert multiple probes on one function. Jprobe is, however, limited to only one instance per function.
Return probes are also implemented through Kprobes. When the
register_kretprobe() function is called, a
kprobe is attached to the entry of the probed function.
After hitting the probe, the kernel probes mechanism saves the probed
function return address and calls a user-defined return handler. The
control is then passed back to the probed function.
Before you call register_kretprobe(), you need
to set a maxactive argument, which specifies
how many instances of the function can be probed at the same time. If
set too low, you will miss a certain number of probes.
The programming interface of Kprobes consists of functions which are used to register and unregister all used kernel probes, and associated probe handlers. For a more detailed description of these functions and their arguments, see the information sources in Section 5.5, “For More Information”.
register_kprobe()
Inserts a break-point on a specified address. When the break-point is
hit, the pre_handler and
post_handler are called.
register_jprobe()
Inserts a break-point at the specified address. The address needs to be the address of the first instruction of the probed function. When the break-point is hit, the specified handler is run. The handler should have the same argument list and return type as the probed function.
register_kretprobe()
Inserts a return probe for the specified function. When the probed function returns, a specified handler is run. This function returns 0 on success, or a negative error number on failure.
unregister_kprobe(), unregister_jprobe(), unregister_kretprobe()
Removes the specified probe. You can use it any time after the probe has been registered.
register_kprobes(), register_jprobes(), register_kretprobes()
Inserts each of the probes in the specified array.
unregister_kprobes(), unregister_jprobes(), unregister_kretprobes()
Removes each of the probes in the specified array.
disable_kprobe(), disable_jprobe(), disable_kretprobe()
Disables the specified probe temporarily.
enable_kprobe(), enable_jprobe(), enable_kretprobe()
Temporarily enables disabled probes.
debugfs Interface
In recent Linux kernels, the Kprobes instrumentation uses the
kernel's debugfs interface. It can list all
registered probes and globally switch all probes on or off.
The list of all currently registered probes is in the
/sys/kernel/debug/kprobes/list file.
saturn.example.com:~ # cat /sys/kernel/debug/kprobes/list
c015d71a  k  vfs_read+0x0  [DISABLED]
c011a316  j  do_fork+0x0
c03dedc5  r  tcp_v4_rcv+0x0
The first column lists the address in the kernel where the probe is
inserted. The second column prints the type of the probe:
k for kprobe, j for jprobe, and
r for return probe. The third column specifies the
symbol, offset and optional module name of the probe. The following
optional columns include the status information of the probe. If the
probe is inserted on a virtual address which is not valid anymore, it is
marked with [GONE]. If the probe is temporarily
disabled, it is marked with [DISABLED].
The /sys/kernel/debug/kprobes/enabled file
represents a switch with which you can globally and forcibly turn on or
off all the registered kernel probes. To turn them off, simply enter
root # echo "0" > /sys/kernel/debug/kprobes/enabled
on the command line as root. To turn them on again, enter
root # echo "1" > /sys/kernel/debug/kprobes/enabled
Note that this way you do not change the status of the probes. If a
probe is temporarily disabled, it will not be enabled automatically but
will remain in the [DISABLED] state after entering
the latter command.
To learn more about kernel probes, look at the following sources of information:
Thorough but more technically oriented information about kernel probes is in /usr/src/linux/Documentation/kprobes.txt (package kernel-source).
Examples of all three types of probes (together with the related Makefile) are in the /usr/src/linux/samples/kprobes/ directory (package kernel-source).
In-depth information about Linux kernel modules and the printk kernel routine is in The Linux Kernel Module Programming Guide.
Practical but slightly outdated information about the use of kernel probes can be found in Kernel debugging with Kprobes
Perf is an interface to access the performance monitoring unit (PMU) of a processor and to record and display software events such as page faults. It supports system-wide, per-thread, and KVM virtualization guest monitoring.
You can store resulting information in a report. This report contains information about, for example, instruction pointers or what code a thread was executing.
Perf consists of two parts:
Code integrated into the Linux kernel that is responsible for instructing the hardware.
The perf user space utility that allows you to use the
kernel code and helps you analyze gathered data.
Performance monitoring means collecting information related to how an application or system performs. This information can be obtained either through software-based means or from the CPU or chipset. Perf integrates both of these methods.
Many modern processors contain a performance monitoring unit (PMU). The design and functionality of a PMU is CPU-specific. For example, the number of registers, counters and features supported will vary by CPU implementation.
Each PMU model consists of a set of registers: the performance monitor configuration (PMC) and the performance monitor data (PMD). Both can be read, but only PMCs are writable. These registers store configuration information and data.
Perf supports several profiling modes:
Counting. Count the number of occurrences of an event.
Event-Based Sampling. A less exact way of counting: A sample is recorded whenever a certain threshold number of events has occurred.
Time-Based Sampling. A less exact way of counting: A sample is recorded in a defined frequency.
Instruction-Based Sampling (AMD64 only). The processor follows instructions appearing in a given interval and samples which events they produce. This allows following up on individual instructions and seeing which of them is critical to performance.
The Perf kernel code is already included with the default kernel. To be able to use the user space utility, install the package perf.
To gather the required information, the perf tool has
several subcommands. This section gives an overview of the most often used
commands.
To see help in the form of a man page for any of the subcommands, use either
perf help SUBCOMMAND
or
man perf-SUBCOMMAND.
perf stat
Start a program and create a statistical overview that is displayed after
the program quits.
perf stat is used to count events.
perf record
Start a program and create a report with performance counter information.
The report is stored as perf.data in the current
directory.
perf record is used to sample events.
perf report
Display a report that was previously created with
perf record.
perf annotate
Display a report file and an annotated version of the executed code. If debug symbols are installed, you will also see the source code displayed.
perf list
List event types that Perf can report with the current kernel and with
your CPU.
You can filter event types by category—for example, to see hardware
events only, use perf list hw.
The man page for perf_event_open has short descriptions
for the most important events.
For example, to find a description of the event
branch-misses, search for
BRANCH_MISSES (note the spelling differences):
tux > man perf_event_open | grep -A5 BRANCH_MISSES
Sometimes, events may be ambiguous. Note that the lowercase hardware event names are not the name of raw hardware events but instead the name of aliases created by Perf. These aliases map to differently named but similarly defined hardware events on each supported processor.
For example, the cpu-cycles event is mapped to
the hardware event UNHALTED_CORE_CYCLES on
Intel processors.
On AMD processors, however, it is mapped to the hardware event
CPU_CLK_UNHALTED.
Perf also allows measuring raw events specific to your hardware. To look up their descriptions, see the Architecture Software Developer's Manual of your CPU vendor. The relevant documents for AMD64/Intel 64 processors are linked to in Section 6.7, “For More Information”.
perf top
Display system activity as it happens.
perf trace
This command behaves similarly to strace.
With this subcommand, you can see which system calls are executed by a
particular thread or process and which signals it receives.
To count the number of occurrences of an event, such as those displayed by
perf list, use:
root # perf stat -e EVENT -a
To count multiple types of events at once, list them separated by commas.
For example, to count cpu-cycles and
instructions, use:
root # perf stat -e cpu-cycles,instructions -a
To stop the session, press Ctrl–C.
You can also count the number of occurrences of an event within a particular time:
root # perf stat -e EVENT -a -- sleep TIME
Replace TIME by a value in seconds.
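Counting cpu-cycles and instructions together is useful because their ratio is the IPC (instructions per cycle) of the workload. A minimal sketch with made-up counts (real values come from the perf stat output):

```python
# Derive instructions-per-cycle (IPC) from two counted events, as
# perf stat itself reports. The counts below are hypothetical.
cpu_cycles = 1_200_000_000
instructions = 1_800_000_000

ipc = instructions / cpu_cycles
print(f"{ipc:.2f} instructions per cycle")  # 1.50 instructions per cycle
```

A low IPC suggests the CPU is stalling (for example on cache misses or branch mispredictions), which the other events listed by perf list can help pinpoint.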
There are various ways to sample events specific to a particular command:
To create a report for a newly invoked command, use:
root # perf record COMMAND
Then, use the started process normally. When you quit the process, the Perf session will also stop.
To create a report for the entire system while a newly invoked command is running, use:
root # perf record -a COMMAND
Then, use the started process normally. When you quit the process, the Perf session will also stop.
To create a report for an already running process, use:
root # perf record -p PID
Replace PID with a process ID. To stop the session, press Ctrl–C.
Now you can view the gathered data (perf.data)
using:
tux > perf report
This will open a pseudo-graphical interface. To receive help, press H. To quit, press Q.
If you prefer a graphical interface, try the GTK+ interface of Perf:
tux > perf report --gtk
However, note that the GTK+ interface is very limited in functionality.
This chapter only provides a short overview. Refer to the following links for more information:
The project home page.
It also features a tutorial on using perf.
Unofficial page with many one-line examples of how to use
perf.
Unofficial page with several resources, mostly relating to the Linux kernel code of Perf and its API. This page includes, for example, a CPU compatibility table and a programming guide.
The Intel Architectures Software Developer's Manual, Volume 3B.
The AMD Architecture Programmer's Manual, Volume 2.
Consult this chapter for other performance optimizations.
OProfile is a profiler for dynamic program analysis. It investigates the behavior of a running program and gathers information. This information can be viewed and gives hints for further optimization.
It is not necessary to recompile or use wrapper libraries to use OProfile. Not even a kernel patch is needed. Usually, when profiling an application, a small overhead is expected, depending on the workload and sampling frequency.
OProfile consists of a kernel driver and a daemon for collecting data. It uses the hardware performance counters provided on many processors. OProfile is capable of profiling all code including the kernel, kernel modules, kernel interrupt handlers, system shared libraries, and other applications.
Modern processors support profiling through hardware performance counters. Depending on the processor, there can be many counters, and each can be programmed with an event to count. Each counter has a value that determines how often a sample is taken: the lower the value, the more often samples are taken.
During the post-processing step, all information is collected and instruction addresses are mapped to a function name.
To use OProfile, install the oprofile package.
It is useful to install the *-debuginfo package for
the respective application you want to profile. If you want to profile
the kernel, you need the debuginfo package as well.
OProfile contains several utilities to handle the profiling process and its profiled data. The following list is a short summary of programs used in this chapter:
opannotate
Outputs annotated source or assembly listings mixed with profile
information. An annotated report can be used in combination with
addr2line to identify the source file and line
where hotspots potentially exist. See man addr2line
for more information.
opcontrol
Controls the profiling sessions (start or stop), dumps profile data, and sets up parameters.
ophelp
Lists available events with short descriptions.
opimport
Converts sample database files from a foreign binary format to the native format.
opreport
Generates reports from profiled data.
With OProfile, you can profile both the kernel and applications. When
profiling the kernel, tell OProfile where to find the
vmlinux* file. Use the --vmlinux
option and point it to vmlinux* (usually in
/boot). If you need to profile kernel modules,
OProfile does this by default. However, make sure you read
http://oprofile.sourceforge.net/doc/kernel-profiling.html.
Applications usually do not need to profile the kernel, therefore you
should use the --no-vmlinux option to reduce the amount
of information.
The following procedure shows how to start the daemon, collect data, stop the daemon, and create a report.
Open a shell and log in as root.
Decide if you want to profile with or without the Linux kernel:
Profile With the Linux Kernel.
Execute the following commands, because
opcontrol can only work with uncompressed
images:
tux > cp /boot/vmlinux-`uname -r`.gz /tmp
tux > gunzip /tmp/vmlinux*.gz
tux > opcontrol --vmlinux=/tmp/vmlinux*
Profile Without the Linux Kernel. Use the following command:
root # opcontrol --no-vmlinux
To see which functions call other functions in the
output, additionally use the --callgraph option and
set a maximum DEPTH:
root # opcontrol --no-vmlinux --callgraph DEPTH
Start the OProfile daemon:
root # opcontrol --start
Using 2.6+ OProfile kernel interface.
Using log file /var/lib/oprofile/samples/oprofiled.log
Daemon started.
Profiler running.
Now start the application you want to profile.
Stop the OProfile daemon:
root # opcontrol --stop
Dump the collected data to
/var/lib/oprofile/samples:
root # opcontrol --dump
Create a report:
root # opreport
Overflow stats not available
CPU: CPU with timer interrupt, speed 0 MHz (estimated)
Profiling through timer interrupt
TIMER:0|
samples| %|
------------------
84877 98.3226 no-vmlinux
...
Shut down the oprofile daemon:
root # opcontrol --shutdown
The general procedure for event configuration is as follows:
First, use the events CPU_CLK_UNHALTED and
INST_RETIRED to find optimization opportunities.
Use specific events to find bottlenecks. To list them, use the command
opcontrol --list-events.
If you need to profile certain events, first check the available events
supported by your processor with the ophelp command
(example output generated from Intel Core i5 CPU):
root # ophelp
oprofile: available events for CPU type "Intel Architectural Perfmon"
See Intel 64 and IA-32 Architectures Software Developer's Manual
Volume 3B (Document 253669) Chapter 18 for architectural perfmon events
This is a limited set of fallback events because oprofile does not know your CPU
CPU_CLK_UNHALTED: (counter: all)
        Clock cycles when not halted (min count: 6000)
INST_RETIRED: (counter: all)
        number of instructions retired (min count: 6000)
LLC_MISSES: (counter: all)
        Last level cache demand requests from this core that missed the LLC (min count: 6000)
        Unit masks (default 0x41)
        ----------
        0x41: No unit mask
LLC_REFS: (counter: all)
        Last level cache demand requests from this core (min count: 6000)
        Unit masks (default 0x4f)
        ----------
        0x4f: No unit mask
BR_MISS_PRED_RETIRED: (counter: all)
        number of mispredicted branches retired (precise) (min count: 500)
You can get the same output from opcontrol
--list-events.
Specify the performance counter events with the option
--event. Multiple options are possible. This option
needs an event name (from ophelp) and a sample rate,
for example:
root # opcontrol --event=CPU_CLK_UNHALTED:100000
The value after the colon is the sample count: a sample is taken every 100000 occurrences of the event. Setting this count too low results in very frequent sampling, which can seriously impair system performance or even disrupt the system to such a degree that the data is useless. It is recommended to tune the performance metric being monitored with and without OProfile and to experimentally determine the minimum sample rate that disturbs the performance the least.
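To get a feeling for what a sample count like 100000 means in practice, the resulting sample rate can be estimated from the event frequency. A minimal sketch, assuming a hypothetical 2 GHz core that is never halted:

```python
# Estimate the OProfile sample rate for a cycle-counting event.
# A sample is taken every `count` occurrences of the event; on a core
# clocked at `clock_hz`, CPU_CLK_UNHALTED fires roughly clock_hz times
# per second while the core is busy. Numbers are hypothetical.

def samples_per_second(clock_hz, count):
    return clock_hz // count

# 2 GHz busy core, one sample every 100000 unhalted cycles:
rate = samples_per_second(2_000_000_000, 100_000)
print(rate)  # 20000 samples per second
```

Halving the count doubles the sample rate, and with it the profiling overhead, which is why the minimum count values shown by ophelp exist.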
The GUI for OProfile can be started as root with
oprof_start, see
Figure 7.1, “GUI for OProfile”. Select your events and change the
counter, if necessary. Every green line is added to the list of checked
events. Hover the mouse over the line to see a help text in the status
line below. Use the tab to set the
buffer and CPU size, the verbose option and others. Click
to execute OProfile.
Before generating a report, make sure OProfile has dumped your data to
the /var/lib/oprofile/samples directory using the
command opcontrol --dump. A report
can be generated with the commands opreport or
opannotate.
Calling opreport without any options gives a complete
summary. With an executable as an argument, it retrieves profile data
only for this executable. If you analyze applications written in C++, use
the --demangle=smart option.
The opannotate command generates output with annotations from the
source code. Run it with the following options:
root # opannotate --source \
  --base-dirs=BASEDIR \
  --search-dirs= \
  --output-dir=annotated/ \
  /lib/libfoo.so
The --base-dirs option takes a comma-separated list of
paths that are stripped from the paths of the debug source files. These
paths are searched before those given with --search-dirs. The
--search-dirs option is also a comma-separated list of
directories to search for source files.
Because of compiler optimization, code can disappear and appear in a different place. Use the information in http://oprofile.sourceforge.net/doc/debug-info.html to fully understand its implications.
This chapter only provides a short overview. Refer to the following links for more information:
The project home page.
Detailed descriptions of the options of the different tools.
/usr/share/doc/packages/oprofile/oprofile.html
Contains the OProfile manual.
Architecture reference for Intel processors.
Architecture reference for PowerPC64 processors in IBM iSeries, pSeries, and Blade server systems.
Tuning the system is not only about optimizing the kernel or getting the most out of your application, it begins with setting up a lean and fast system. The way you set up your partitions and file systems can influence the server's speed. The number of active services and the way routine tasks are scheduled also affects performance.
Kernel Control Groups (known as “cgroups”) are a kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups. These hierarchical groups can be configured to show a specialized behavior that helps with tuning the system to make best use of available hardware and network resources.
In the following sections, we often reference kernel documentation
such as /usr/src/linux/Documentation/cgroups/.
These files are part of the kernel-source
package.
This chapter is an overview. To use cgroups properly and to avoid performance implications, you must study the provided references.
There are physical limitations to hardware that are encountered when many CPUs and lots of memory are required. In this chapter, the important limitation is that there is limited communication bandwidth between the CPUs and the memory. One architecture modification that was introduced to address this is Non-Uniform Memory Access (NUMA).
In this configuration, there are multiple nodes. Each of the nodes contains a subset of all CPUs and memory. The access speed to main memory is determined by the location of the memory relative to the CPU. The performance of a workload depends on the application threads accessing data that is local to the CPU the thread is executing on. Automatic NUMA Balancing is a new feature of SLE 12. Automatic NUMA Balancing migrates data on demand to memory nodes that are local to the CPU accessing that data. Depending on the workload, this can dramatically boost performance when using NUMA hardware.
Power management aims at reducing operating costs for energy and cooling systems while at the same time keeping the performance of a system at a level that matches the current requirements. Thus, power management is always a matter of balancing the actual performance needs and power saving options for a system. Power management can be implemented and used at different levels of the system. A set of specifications for power management functions of devices and the operating system interface to them has been defined in the Advanced Configuration and Power Interface (ACPI). As power savings in server environments can primarily be achieved at the processor level, this chapter introduces some main concepts and highlights some tools for analyzing and influencing relevant parameters.
Tuning the system is not only about optimizing the kernel or getting the most out of your application, it begins with setting up a lean and fast system. The way you set up your partitions and file systems can influence the server's speed. The number of active services and the way routine tasks are scheduled also affects performance.
A carefully planned installation ensures that the system is set up exactly as you need it for the given purpose. It also saves considerable time when fine tuning the system. All changes suggested in this section can be made in the step during the installation. See Section 3.10, “Installation Settings” for details.
Depending on the server's range of applications and the hardware layout, the partitioning scheme can influence the machine's performance (although only to a lesser extent). It is beyond the scope of this manual to suggest different partitioning schemes for particular workloads. However, the following rules will positively affect performance. They do not apply when using an external storage system.
Make sure there is always some free space available on the disk, since a full disk delivers inferior performance.
Disperse simultaneous read and write access onto different disks by, for example:
using separate disks for the operating system, data, and log files
placing a mail server's spool directory on a separate disk
distributing the user directories of a home server between different disks
The installation scope has no direct influence on the machine's performance, but a carefully chosen scope of packages has advantages. It is recommended to install the minimum of packages needed to run the server. A system with a minimum set of packages is easier to maintain and has fewer potential security issues. Furthermore, a tailor-made installation scope also ensures that no unnecessary services are started by default.
openSUSE Leap lets you customize the installation scope on the Installation Summary screen. By default, you can select or remove preconfigured patterns for specific tasks, but it is also possible to start the YaST Software Manager for a fine-grained package-based selection.
One or more of the following default patterns may not be needed in all cases:
Servers rarely need a full desktop environment. In case a graphical environment is needed, a more economical solution such as IceWM can be sufficient.
When solely administrating the server and its applications via command line, consider not installing this pattern. However, keep in mind that it is needed to run GUI applications from a remote machine. If your application is managed by a GUI or if you prefer the GUI version of YaST, keep this pattern.
This pattern is only needed if you want to print from the machine.
A running X Window System consumes many resources and is rarely needed on
a server. It is strongly recommended to boot the system into the
multi-user.target target. You will still be able to
start graphical applications remotely.
The default installation starts several services (the number varies with the installation scope). Since each service consumes resources, it is recommended to disable the ones not needed. Run › › to start the services management module.
If you are using the graphical version of YaST, you can click the column headlines to sort the list of services. Use this to get an overview of which services are currently running. Use the button to disable the service for the running session. To permanently disable it, use the button.
The following list shows services that are started by default after the installation of openSUSE Leap. Check which of the components you need, and disable the others:
Loads the Advanced Linux Sound Architecture (ALSA).
A daemon for the Audit system (see Part VI, “The Linux Audit Framework” for details). Disable this if you do not use Audit.
Handles cold plugging of Bluetooth dongles.
A printer daemon.
Enables the execution of *.class or
*.jar Java programs.
Services needed to mount NFS.
Services needed to mount SMB/CIFS file systems from a Windows* server.
Shows the splash screen on start-up.
Hard disks are the slowest components in a computer system and therefore often the cause for a bottleneck. Using the file system that best suits your workload helps to improve performance. Using special mount options or prioritizing a process's I/O priority are further means to speed up the system.
openSUSE Leap ships with several file systems, including Btrfs, Ext4, Ext3, Ext2, ReiserFS, and XFS. Each file system has its own advantages and disadvantages.
NFS (Version 3) tuning is covered in detail in the NFS Howto at
http://nfs.sourceforge.net/nfs-howto/. The first
thing to experiment with when mounting NFS shares is increasing the
read and write block size to 32768 by using the mount
options rsize and wsize.
Each file and directory in a file system has three time stamps associated
with it: a time when the file was last read, called access
time; a time when the file data was last modified, called
modification time; and a time when the file metadata
was last modified, called change time. Keeping the access
time always up to date has significant performance overhead, since every
read-only access incurs a write operation. Thus, by default, every file
system updates the access time only if the current access time is older than a
day, or if it is older than the file modification or change time. This feature
is called relative access time, and the corresponding
mount option is relatime. Updates of the access time can be
completely disabled using the noatime mount option;
however, you need to verify that your applications do not rely on it. This can be
true for file and Web servers or for network storage. If the default
relative access time update policy is not suitable for your applications,
use the strictatime mount option.
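The relative access time policy described above can be modeled as a small predicate. This is a sketch of the decision rule as described here, not the actual kernel implementation:

```python
# Sketch of the relatime rule: update atime only if it is older than
# mtime or ctime, or if it is more than a day old. All times are Unix
# timestamps in seconds. This models the behavior described above.
DAY = 24 * 60 * 60

def relatime_should_update(atime, mtime, ctime, now):
    if atime < mtime or atime < ctime:   # file changed since the last read
        return True
    if now - atime >= DAY:               # access time older than one day
        return True
    return False

now = 1_000_000
# Read shortly after a write: atime is behind mtime, so it is updated.
assert relatime_should_update(atime=now - 10, mtime=now - 5, ctime=now - 5, now=now)
# Repeated reads within a day of the last update: no write needed.
assert not relatime_should_update(atime=now - 10, mtime=now - 20, ctime=now - 20, now=now)
# A read after more than a day: atime is refreshed once.
assert relatime_should_update(atime=now - 2 * DAY, mtime=now - 3 * DAY, ctime=now - 3 * DAY, now=now)
```

With noatime the predicate is always false, and with strictatime it is always true.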
Some file systems (for example Ext4) also support lazy time stamp updates.
When this feature is enabled using the lazytime mount
option, updates of all time stamps happen in memory but they are not
written to disk. That happens only in response to
fsync or sync system
calls, when the file information is written due to another reason such as
file size update, when time stamps are older than 24 hours, or when cached
file information needs to be evicted from memory.
To update mount options used for a file system, either edit
/etc/fstab directly, or use the dialog when editing or adding a partition with the
YaST Partitioner.
ionice
The ionice command lets you prioritize disk access
for single processes. This enables you to give less I/O priority to
background processes with heavy disk access that are not time-critical,
such as backup jobs. ionice also lets you raise the
I/O priority for a specific process to make sure this process always has
immediate access to the disk. The caveat of this feature is that standard
writes are cached in the page cache and are written back to persistent
storage only later by an independent kernel process. Thus the I/O priority
setting generally does not apply to these writes. Also be aware that the
I/O class and priority setting is obeyed only by the CFQ
I/O scheduler (refer to Section 12.2, “Available I/O Elevators”). You
can set the following three scheduling classes:
A process from the idle scheduling class is only granted disk access when no other process has asked for disk I/O.
The default scheduling class used for any process that has not asked
for a specific I/O priority. Priority within this class can be
adjusted to a level from 0 to 7
(with 0 being the highest priority). Programs
running at the same best-effort priority are served in a round-robin
fashion. Some kernel versions treat priority within the best-effort
class differently—for details, refer to the
ionice(1) man page.
Processes in this class are always granted disk access first.
Fine-tune the priority level from 0 to
7 (with 0 being the highest
priority). Use with care, since it can starve other processes.
For more details and the exact command syntax refer to the
ionice(1) man page. If you need more reliable
control over bandwidth available to each application, use
Kernel Control Groups as described in
Section 9.3, “Control Group Subsystems”.
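Under the hood, ionice passes the scheduling class and priority level to the ioprio_set system call as a single encoded value. The following is a sketch of that encoding based on the constants in the kernel's linux/ioprio.h; the helper name is ours:

```python
# Sketch of how the ionice class and priority level are combined into
# one value for the ioprio_set() system call (see linux/ioprio.h).
IOPRIO_CLASS_SHIFT = 13
IOPRIO_CLASS_RT = 1    # real-time class
IOPRIO_CLASS_BE = 2    # best-effort class (the default)
IOPRIO_CLASS_IDLE = 3  # idle class

def ioprio_value(ioprio_class, level=0):
    """Combine scheduling class and priority level (0 = highest, 7 = lowest)."""
    return (ioprio_class << IOPRIO_CLASS_SHIFT) | level

# ionice -c2 -n4 (best-effort, level 4):
print(ioprio_value(IOPRIO_CLASS_BE, 4))   # 16388
# ionice -c3 (idle; the level is ignored for this class):
print(ioprio_value(IOPRIO_CLASS_IDLE))    # 24576
```

The upper bits carry the class and the lower bits the level, which is why only the best-effort and real-time classes make use of the 0 to 7 range.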
Kernel Control Groups (known as “cgroups”) are a kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups. These hierarchical groups can be configured to show a specialized behavior that helps with tuning the system to make best use of available hardware and network resources.
In the following sections, we often reference kernel documentation
such as /usr/src/linux/Documentation/cgroups/.
These files are part of the kernel-source
package.
This chapter is an overview. To use cgroups properly and to avoid performance implications, you must study the provided references.
The following terms are used in this chapter:
“cgroup” is another name for Control Groups.
In a cgroup there is a set of tasks (processes) associated with a set of subsystems that act as parameters constituting an environment for the tasks.
Subsystems provide the parameters that can be assigned and define CPU sets, freezer, or—more general—“resource controllers” for memory, disk I/O, network traffic, etc.
cgroups are organized in a tree-structured hierarchy. There can be more than one hierarchy in the system. You use a different or alternate hierarchy to cope with specific situations.
Every task running in the system is in exactly one of the cgroups in the hierarchy.
See the following resource planning scenario for a better understanding
(source:
/usr/src/linux/Documentation/cgroups/cgroups.txt):
Web browsers such as Firefox will be part of the Web network class, while NFS daemons such as (k)nfsd will be part of the NFS network class. On the other hand, Firefox will share the appropriate CPU and memory classes depending on whether a professor or a student started it.
The following subsystems are available:
cpuset,
cpu,
cpuacct,
memory,
devices,
freezer,
net_cls,
net_prio,
blkio,
perf_event, and
hugetlb.
Either mount each subsystem separately, for example:
tux > sudo mkdir /cpuset /cpu
tux > sudo mount -t cgroup -o cpuset none /cpuset
tux > sudo mount -t cgroup -o cpu,cpuacct none /cpu
or all subsystems in one go; you can use an arbitrary device name (for example
none), which will appear in
/proc/mounts, for example:
tux > sudo mount -t cgroup none /sys/fs/cgroup
Some additional information on available subsystems:
net_cls (Identification)
The Network classifier cgroup helps with providing identification for controlling processes such as the Traffic Controller (tc) or Netfilter (iptables). These controller tools can act on tagged network packets.
For more information, see
/usr/src/linux/Documentation/cgroups/net_cls.txt.
net_prio (Identification)
The Network priority cgroup helps with setting the priority of network packets.
For more information, see
/usr/src/linux/Documentation/cgroups/net_prio.txt.
devices (Isolation)
A system administrator can provide a list of devices that can be accessed by processes under cgroups.
It limits access to a device or a file system on a device to only
tasks that belong to the specified cgroup. For more information, see
/usr/src/linux/Documentation/cgroups/devices.txt.
freezer (Control)
The freezer subsystem is useful for
high-performance computing clusters (HPC clusters). Use it to
freeze (stop) all tasks in a group or to stop tasks, if they reach
a defined checkpoint. For more information, see
/usr/src/linux/Documentation/cgroups/freezer-subsystem.txt.
Here are basic commands to use the freezer subsystem:
mount -t cgroup -o freezer freezer /freezer
# Create a child cgroup:
mkdir /freezer/0
# Put a task into this cgroup:
echo $task_pid > /freezer/0/tasks
# Freeze it:
echo FROZEN > /freezer/0/freezer.state
# Unfreeze (thaw) it:
echo THAWED > /freezer/0/freezer.state
perf_event (Control)
perf_event collects performance data.
cpuset (Isolation)
Use cpuset to tie processes to system subsets
of CPUs and memory (“memory nodes”). For an example,
see Section 9.4.2, “Example: Cpusets”.
cpuacct (Accounting)
The CPU accounting controller groups tasks using cgroups and accounts
the CPU usage of these groups. For more information, see
/usr/src/linux/Documentation/cgroups/cpuacct.txt.
memory (Resource Control)
Tracking or limiting memory usage of user space processes.
Control swap usage by setting swapaccount=1 as a
kernel boot parameter.
Limit LRU (Least Recently Used) pages.
Anonymous and file cache.
No limits for kernel memory.
Maybe in another subsystem if needed.
The memory cgroup now offers a mechanism allowing easier workload
opt-in isolation. A memory cgroup can define its so-called low
limit (memory.low_limit_in_bytes), which works
as a protection from memory pressure. For workloads that need to be
isolated from outside memory management activity, the value should be set
to the expected Resident Set Size (RSS) plus some
headroom. If a memory pressure condition triggers on the system and
the particular group is still under its low limit, its memory is
protected from reclaim. As a result, workloads outside of the
cgroup do not need the aforementioned capping.
For more information, see
/usr/src/linux/Documentation/cgroups/memory.txt.
hugetlb (Resource Control)
The HugeTLB controller manages the memory allocated to huge pages.
For more information, see
/usr/src/linux/Documentation/cgroups/hugetlb.txt.
cpu (Control)
Share CPU bandwidth between groups with the group scheduling function of CFS (the scheduler). Mechanically complicated.
The Block IO controller is available as a disk I/O controller. With the blkio controller you can currently set policies for proportional bandwidth and for throttling.
These are the basic commands to configure proportional weight division
of bandwidth by setting weight values in
blkio.weight:
# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Start two cgroups
mkdir -p /sys/fs/cgroup/blkio/group1 /sys/fs/cgroup/blkio/group2
# Set weights
echo 1000 > /sys/fs/cgroup/blkio/group1/blkio.weight
echo 500 > /sys/fs/cgroup/blkio/group2/blkio.weight
# Write the PIDs of the processes to be controlled to the
# appropriate groups
COMMAND1 &
echo $! > /sys/fs/cgroup/blkio/group1/tasks
COMMAND2 &
echo $! > /sys/fs/cgroup/blkio/group2/tasks
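Under contention, each group receives disk bandwidth in proportion to its weight. A minimal sketch of the split implied by the weights 1000 and 500 used above:

```python
# Proportional bandwidth division implied by blkio.weight values:
# each contending group gets weight / sum_of_weights of the bandwidth.
def bandwidth_share(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = bandwidth_share({"group1": 1000, "group2": 500})
print(shares)  # group1 gets about 2/3, group2 about 1/3
```

Note that the weights only matter while both groups are actually issuing I/O; an idle group's share is redistributed to the active ones.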
These are the basic commands to configure throttling or upper limit
policy by setting values in
blkio.throttle.read_bps_device for reads and
blkio.throttle.write_bps_device for writes:
# Setup in /sys/fs/cgroup
mkdir /sys/fs/cgroup/blkio
mount -t cgroup -o blkio none /sys/fs/cgroup/blkio
# Bandwidth rate of a device for the root group; format:
# <major>:<minor> <bytes_per_second>
echo "8:16 1048576" > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device
For more information about caveats, usage scenarios, and additional
parameters, see
/usr/src/linux/Documentation/cgroups/blkio-controller.txt.
To conveniently use cgroups, install the following additional packages:
libcgroup-tools — basic user space tools
to simplify resource management
libcgroup1 — control groups
management library
cpuset — contains the
cset tool to manipulate cpusets
libcpuset1 — C API to cpusets
kernel-source — only needed for
documentation purposes
With the command line proceed as follows:
To determine the number of CPUs and memory nodes see
/proc/cpuinfo and
/proc/zoneinfo.
Create the cpuset hierarchy as a virtual file system (source:
/usr/src/linux/Documentation/cgroups/cpusets.txt):
mount -t cgroup -ocpuset cpuset /sys/fs/cgroup/cpuset
cd /sys/fs/cgroup/cpuset
mkdir Charlie
cd Charlie
# List of CPUs in this cpuset:
echo 2-3 > cpuset.cpus
# List of memory nodes in this cpuset:
echo 1 > cpuset.mems
echo $$ > tasks
# The subshell 'sh' is now running in cpuset Charlie
# The next line should display '/Charlie'
cat /proc/self/cpuset
Remove the cpuset using shell commands:
rmdir /sys/fs/cgroup/cpuset/Charlie
This fails as long as this cpuset is in use. First, you must remove the inside cpusets or tasks (processes) that belong to it. Check it with:
cat /sys/fs/cgroup/cpuset/Charlie/tasks
For background information and additional configuration flags, see
/usr/src/linux/Documentation/cgroups/cpusets.txt.
With the cset tool, proceed as follows:
# Determine the number of CPUs and memory nodes
cset set --list
# Creating the cpuset hierarchy
cset set --cpu=2-3 --mem=1 --set=Charlie
# Starting processes in a cpuset
cset proc --set Charlie --exec -- stress -c 1 &
# Moving existing processes to a cpuset
cset proc --move --pid PID --toset=Charlie
# List tasks in a cpuset
cset proc --list --set Charlie
# Removing a cpuset
cset set --destroy Charlie
Using shell commands, proceed as follows:
Create the cgroups hierarchy:
mount -t cgroup cgroup /sys/fs/cgroup
cd /sys/fs/cgroup/cpuset/cgroup
mkdir priority
cd priority
cat cpu.shares
Understanding cpu.shares:
1024 is the default (for more information, see
/Documentation/scheduler/sched-design-CFS.txt)
= 50% usage
1524 = 60% usage
2048 = 67% usage
512 = 40% usage
Changing cpu.shares
echo 1024 > cpu.shares
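The percentages listed above follow from relating a group's cpu.shares value to a competing group at the default of 1024: under full contention, a group's share of the CPU is its shares divided by the sum of all contending shares. A quick sketch of that arithmetic for two competing groups:

```python
# CPU share of a group competing against one group at the default
# of 1024 shares, under full CPU contention.
DEFAULT_SHARES = 1024

def cpu_percentage(shares, other=DEFAULT_SHARES):
    return 100 * shares / (shares + other)

print(round(cpu_percentage(1024)))  # 50
print(round(cpu_percentage(1524)))  # 60
print(round(cpu_percentage(2048)))  # 67
```

Like blkio.weight, cpu.shares only takes effect while the groups are actually competing for CPU time.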
This is a simple example. Use the following in
/etc/cgconfig.conf:
group foo {
perm {
task {
uid = root;
gid = users;
fperm = 660;
}
admin {
uid = root;
gid = root;
fperm = 600;
dperm = 750;
}
}
}
mount {
cpu = /mnt/cgroups/cpu;
}
Then start the cgconfig service and run stat
/mnt/cgroups/cpu/foo/tasks, which should show the permission
mask 660 with root as the owner and
users as the group. stat
/mnt/cgroups/cpu/foo/ should show 750, and all
files (except tasks) should have the mask
600. Note that fperm is applied on
top of the existing file permissions as a mask.
For more information, see the cgconfig.conf man
page.
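The note that fperm is applied on top of the existing file permissions as a mask can be checked with a line of octal arithmetic (the existing mode 644 is a hypothetical example; the fperm value 660 comes from the cgconfig.conf snippet above):

```python
# fperm acts as a mask over the existing file permissions:
# effective = existing & fperm.
existing = 0o644  # hypothetical existing mode of a control file
fperm = 0o660     # fperm from the cgconfig.conf example

effective = existing & fperm
print(oct(effective))  # 0o640
```

Bits not present in both the existing mode and fperm are dropped, so fperm can only remove permissions, never add them.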
Kernel documentation (package kernel-source):
files in /usr/src/linux/Documentation/cgroups.
http://lwn.net/Articles/604609/—Brown, Neil: Control Groups Series (2014, 7 parts).
http://lwn.net/Articles/243795/—Corbet, Jonathan: Controlling memory use in containers (2007).
http://lwn.net/Articles/236038/—Corbet, Jonathan: Process containers (2007).
There are physical limitations to hardware that are encountered when many CPUs and lots of memory are required. In this chapter, the important limitation is that there is limited communication bandwidth between the CPUs and the memory. One architecture modification that was introduced to address this is Non-Uniform Memory Access (NUMA).
In this configuration, there are multiple nodes. Each of the nodes contains a subset of all CPUs and memory. The access speed to main memory is determined by the location of the memory relative to the CPU. The performance of a workload depends on the application threads accessing data that is local to the CPU the thread is executing on. Automatic NUMA Balancing is a new feature of SLE 12. Automatic NUMA Balancing migrates data on demand to memory nodes that are local to the CPU accessing that data. Depending on the workload, this can dramatically boost performance when using NUMA hardware.
Automatic NUMA balancing happens in three basic steps:
A task scanner periodically scans a portion of a task's address space and marks the memory to force a page fault when the data is next accessed.
The next access to the data will result in a NUMA Hinting Fault. Based on this fault, the data can be migrated to a memory node associated with the task accessing the memory.
To keep a task, the CPU it is using, and the memory it is accessing together, the scheduler groups tasks that share data.
The unmapping of data and the page fault handling incur overhead. However, the overhead is commonly offset by threads subsequently accessing data that is local to the CPU they run on.
Static configuration has been the recommended way of tuning workloads on
NUMA hardware for some time. To do this, memory policies can be set with
numactl, taskset or
cpusets. NUMA-aware applications can use special APIs.
In cases where the static policies have already been created, automatic
NUMA balancing should be disabled as the data access should already be
local.
numactl --hardware shows the
memory configuration of the machine and whether it supports NUMA.
The following is example output from a 4-node machine.
tux > numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 4 8 12 16 20 24 28 32 36 40 44
node 0 size: 16068 MB
node 0 free: 15909 MB
node 1 cpus: 1 5 9 13 17 21 25 29 33 37 41 45
node 1 size: 16157 MB
node 1 free: 15948 MB
node 2 cpus: 2 6 10 14 18 22 26 30 34 38 42 46
node 2 size: 16157 MB
node 2 free: 15981 MB
node 3 cpus: 3 7 11 15 19 23 27 31 35 39 43 47
node 3 size: 16157 MB
node 3 free: 16028 MB
node distances:
node 0 1 2 3
0: 10 20 20 20
1: 20 10 20 20
2: 20 20 10 20
3: 20 20 20 10
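The node distances are relative access costs, with 10 denoting local access. A small sketch for extracting the remote-to-local cost ratio from one row of such a matrix follows; the row is pasted in literally here, whereas on a real machine it would come from parsing numactl --hardware output:

```shell
# Ratio of the most expensive remote access to local access for node 0,
# computed from node 0's row of the distance matrix.
echo "0: 10 20 20 20" | awk '{
  local_cost = $2                      # distance from the node to itself
  max = 0
  for (i = 3; i <= NF; i++) if ($i > max) max = $i
  printf "remote/local cost ratio: %.1f\n", max / local_cost
}'
```

On this machine, remote accesses are therefore roughly twice as expensive as local ones, which is why data placement matters.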
Automatic NUMA balancing can be enabled or disabled for the current
session by writing 1 or 0
to /proc/sys/kernel/numa_balancing, which
enables or disables the feature respectively. To permanently enable or
disable it, use the kernel command line option
numa_balancing=[enable|disable].
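To make the boot-time choice persistent, the option can be appended to the kernel command line, for example in /etc/default/grub (the other options shown are placeholders; regenerate the GRUB configuration afterward, for example with grub2-mkconfig -o /boot/grub2/grub.cfg):

```
# /etc/default/grub -- append numa_balancing=disable to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash numa_balancing=disable"
```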
If Automatic NUMA Balancing is enabled, the task scanner behavior can be configured. The task scanner balances the overhead of Automatic NUMA Balancing with the amount of time it takes to identify the best placement of data.
numa_balancing_scan_delay_ms
The amount of CPU time a thread must consume before its data is scanned. This prevents creating overhead because of short-lived processes.
numa_balancing_scan_period_min_ms and
numa_balancing_scan_period_max_ms
Controls how frequently a task's data is scanned. Depending on the locality of the faults, the scan rate will increase or decrease. These settings control the minimum and maximum scan rates.
numa_balancing_scan_size_mb
Controls how much address space is scanned when the task scanner is active.
The most important task is to assign metrics to your workload and to
measure the performance with Automatic NUMA Balancing enabled and
disabled, to evaluate the impact. Profiling tools can be used to monitor
local and remote memory accesses if the CPU supports such monitoring.
Automatic NUMA Balancing activity can be monitored via the following
parameters in /proc/vmstat:
numa_pte_updates
The number of base pages that were marked for NUMA hinting faults.
numa_huge_pte_updates
The number of transparent huge pages that were marked for NUMA hinting
faults. In combination with numa_pte_updates, the
total address space that was marked can be calculated.
numa_hint_faults
Records how many NUMA hinting faults were trapped.
numa_hint_faults_local
Shows how many of the hinting faults were to local nodes. In
combination with numa_hint_faults, the percentage
of local versus remote faults can be calculated. A high percentage of
local hinting faults indicates that the workload is closer to being
converged.
numa_pages_migrated
Records how many pages were migrated because they were misplaced. As migration is a copying operation, it contributes the largest part of the overhead created by NUMA balancing.
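The convergence indicator described above, the share of hinting faults that were local, can be computed directly from these counters. The sketch below feeds awk a literal sample whose numbers are invented for illustration; on a live system, the input would come from grep numa_hint_faults /proc/vmstat:

```shell
# Percentage of NUMA hinting faults that hit a local node.
awk '/^numa_hint_faults /       { total = $2 }
     /^numa_hint_faults_local / { local_faults = $2 }
     END { if (total > 0) printf "local hinting faults: %.1f%%\n", 100 * local_faults / total }' <<'EOF'
numa_hint_faults 18234
numa_hint_faults_local 15499
EOF
```

A ratio approaching 100% suggests the workload has converged; a persistently low ratio suggests data is still being accessed remotely.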
The following illustrates a simple test case of a 4-node NUMA machine running SpecJBB 2005 with a single JVM instance and no static tuning of memory policies. Note, however, that the impact for each workload varies, and that this example is based on a pre-release version of openSUSE Leap 12.
Balancing disabled Balancing enabled
TPut 1 26629.00 ( 0.00%) 26507.00 ( -0.46%)
TPut 2 55841.00 ( 0.00%) 53592.00 ( -4.03%)
TPut 3 86078.00 ( 0.00%) 86443.00 ( 0.42%)
TPut 4 116764.00 ( 0.00%) 113272.00 ( -2.99%)
TPut 5 143916.00 ( 0.00%) 141581.00 ( -1.62%)
TPut 6 166854.00 ( 0.00%) 166706.00 ( -0.09%)
TPut 7 195992.00 ( 0.00%) 192481.00 ( -1.79%)
TPut 8 222045.00 ( 0.00%) 227143.00 ( 2.30%)
TPut 9 248872.00 ( 0.00%) 250123.00 ( 0.50%)
TPut 10 270934.00 ( 0.00%) 279314.00 ( 3.09%)
TPut 11 297217.00 ( 0.00%) 301878.00 ( 1.57%)
TPut 12 311021.00 ( 0.00%) 326048.00 ( 4.83%)
TPut 13 324145.00 ( 0.00%) 346855.00 ( 7.01%)
TPut 14 345973.00 ( 0.00%) 378741.00 ( 9.47%)
TPut 15 354199.00 ( 0.00%) 394268.00 ( 11.31%)
TPut 16 378016.00 ( 0.00%) 426782.00 ( 12.90%)
TPut 17 392553.00 ( 0.00%) 437772.00 ( 11.52%)
TPut 18 396630.00 ( 0.00%) 456715.00 ( 15.15%)
TPut 19 399114.00 ( 0.00%) 484020.00 ( 21.27%)
TPut 20 413907.00 ( 0.00%) 493618.00 ( 19.26%)
TPut 21 413173.00 ( 0.00%) 510386.00 ( 23.53%)
TPut 22 420256.00 ( 0.00%) 521016.00 ( 23.98%)
TPut 23 425581.00 ( 0.00%) 536214.00 ( 26.00%)
TPut 24 429052.00 ( 0.00%) 532469.00 ( 24.10%)
TPut 25 426127.00 ( 0.00%) 526548.00 ( 23.57%)
TPut 26 422428.00 ( 0.00%) 531994.00 ( 25.94%)
TPut 27 424378.00 ( 0.00%) 488340.00 ( 15.07%)
TPut 28 419338.00 ( 0.00%) 543016.00 ( 29.49%)
TPut 29 403347.00 ( 0.00%) 529178.00 ( 31.20%)
TPut 30 408681.00 ( 0.00%) 510621.00 ( 24.94%)
TPut 31 406496.00 ( 0.00%) 499781.00 ( 22.95%)
TPut 32 404931.00 ( 0.00%) 502313.00 ( 24.05%)
TPut 33 397353.00 ( 0.00%) 522418.00 ( 31.47%)
TPut 34 382271.00 ( 0.00%) 491989.00 ( 28.70%)
TPut 35 388965.00 ( 0.00%) 493012.00 ( 26.75%)
TPut 36 374702.00 ( 0.00%) 502677.00 ( 34.15%)
TPut 37 367578.00 ( 0.00%) 500588.00 ( 36.19%)
TPut 38 367121.00 ( 0.00%) 496977.00 ( 35.37%)
TPut 39 355956.00 ( 0.00%) 489430.00 ( 37.50%)
TPut 40 350855.00 ( 0.00%) 487802.00 ( 39.03%)
TPut 41 345001.00 ( 0.00%) 468021.00 ( 35.66%)
TPut 42 336177.00 ( 0.00%) 462260.00 ( 37.50%)
TPut 43 329169.00 ( 0.00%) 467906.00 ( 42.15%)
TPut 44 329475.00 ( 0.00%) 470784.00 ( 42.89%)
TPut 45 323845.00 ( 0.00%) 450739.00 ( 39.18%)
TPut 46 323878.00 ( 0.00%) 435457.00 ( 34.45%)
TPut 47 310524.00 ( 0.00%) 403914.00 ( 30.07%)
TPut 48 311843.00 ( 0.00%) 459017.00 ( 47.19%)
Balancing Disabled Balancing Enabled
Expctd Warehouse 48.00 ( 0.00%) 48.00 ( 0.00%)
Expctd Peak Bops 310524.00 ( 0.00%) 403914.00 ( 30.07%)
Actual Warehouse 25.00 ( 0.00%) 29.00 ( 16.00%)
Actual Peak Bops 429052.00 ( 0.00%) 543016.00 ( 26.56%)
SpecJBB Bops 6364.00 ( 0.00%) 9368.00 ( 47.20%)
SpecJBB Bops/JVM 6364.00 ( 0.00%) 9368.00 ( 47.20%)
Automatic NUMA Balancing simplifies tuning workloads for high performance on NUMA machines. Where possible, it is still recommended to statically tune the workload to partition it within each node. However, in all other cases, automatic NUMA balancing should boost performance.
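The percentages in the tables above are the relative change of the enabled column against the disabled baseline. For example, the 30.07% figure for the expected peak can be reproduced like this:

```shell
# Relative throughput change: 100 * (enabled - disabled) / disabled
awk -v disabled=310524 -v enabled=403914 \
    'BEGIN { printf "change: %.2f%%\n", 100 * (enabled - disabled) / disabled }'
```

The same formula applies to every TPut row, with the disabled-column value as the baseline.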
Power management aims at reducing operating costs for energy and cooling systems while at the same time keeping the performance of a system at a level that matches the current requirements. Thus, power management is always a matter of balancing the actual performance needs and power saving options for a system. Power management can be implemented and used at different levels of the system. A set of specifications for power management functions of devices and the operating system interface to them has been defined in the Advanced Configuration and Power Interface (ACPI). As power savings in server environments can primarily be achieved at the processor level, this chapter introduces some main concepts and highlights some tools for analyzing and influencing relevant parameters.
At the CPU level, you can control power usage in various ways: for example, by using idling power states (C-states), changing CPU frequency (P-states), and throttling the CPU (T-states). The following sections give a short introduction to each approach and its significance for power savings. Detailed specifications can be found at http://www.acpi.info/spec.htm.
Modern processors have several power saving modes called
C-states. They reflect the capability of an idle
processor to turn off unused components to save power.
When a processor is in the C0 state, it is executing
instructions. A processor running in any other C-state is idle. The
higher the C number, the deeper the CPU sleep mode: more components are
shut down to save power. Deeper sleep states can save large amounts of
energy. Their downside is that they introduce latency: it takes more
time for the CPU to return to C0.
Depending on workload (threads waking up, triggering CPU usage and then
going back to sleep again for a short period of time) and hardware (for
example, interrupt activity of a network device), disabling the deepest
sleep states can significantly increase overall performance. For details
on how to do so, refer to
Section 11.3.2, “Viewing Kernel Idle Statistics with cpupower”.
Some states also have submodes with different power saving latency
levels. Which C-states and submodes are supported depends on the
respective processor. However, C1 is always
available.
Table 11.1, “C-States” gives an overview of the most common C-states.
Mode    Definition
C0      Operational state. CPU fully turned on.
C1      First idle state. Stops CPU main internal clocks via software. Bus interface unit and APIC are kept running at full speed.
C2      Stops CPU main internal clocks via hardware. State in which the processor maintains all software-visible states, but may take longer to wake up through interrupts.
C3      Stops all CPU internal clocks. The processor does not need to keep its cache coherent, but maintains other states. Some processors have variations of the C3 state that differ in how long it takes to wake the processor through interrupts.
To avoid needless power consumption, it is recommended to test your
workloads with deep sleep states enabled versus deep sleep states
disabled. For more information, refer to
Section 11.3.2, “Viewing Kernel Idle Statistics with cpupower” or the
cpupower-idle-set(1) man page.
While a processor operates (in C0 state), it can be in one of several
CPU performance states (P-states). Whereas C-states
are idle states (all but C0), P-states are
operational states that relate to CPU frequency and voltage.
The higher the P-state, the lower the frequency and voltage at which the
processor runs. The number of P-states is processor-specific and the
implementation differs across the various types. However,
P0 is always the highest-performance state (except for Section 11.1.3, “Turbo Features”). Higher
P-state numbers represent slower processor speeds and lower power
consumption. For example, a processor in P3 state runs
more slowly and uses less power than a processor running in the
P1 state. To operate at any P-state, the processor
must be in the C0 state, which means that it is
working and not idling. The CPU P-states are also defined in the ACPI
specification, see http://www.acpi.info/spec.htm.
C-states and P-states can vary independently of one another.
Turbo features allow active CPU cores to be dynamically overclocked
while other cores are in deep sleep states. This increases the performance
of active threads while still
complying with Thermal Design Power (TDP) limits.
However, the conditions under which a CPU core can use turbo frequencies
are architecture-specific. Learn how to evaluate the efficiency of those
new features in Section 11.3, “The cpupower Tools”.
The in-kernel governors belong to the Linux kernel CPUfreq infrastructure and can be used to dynamically scale processor frequencies at runtime. You can think of the governors as a sort of preconfigured power scheme for the CPU. The CPUfreq governors use P-states to change frequencies and lower power consumption. The dynamic governors can switch between CPU frequencies, based on CPU usage, to allow for power savings while not sacrificing performance.
The following governors are available with the CPUfreq subsystem:
The CPU frequency is statically set to the highest possible for maximum performance. Consequently, saving power is not the focus of this governor.
The CPU frequency is statically set to the lowest possible. This can
have a severe impact on performance, as the system will never rise
above this frequency no matter how busy the processors are. An important
exception is the intel_pstate driver, which defaults to the
powersave mode. This is due to a hardware-specific
decision, but functionally it operates similarly to the
on-demand governor.
However, using this governor often does not lead to the expected power savings as the highest savings can usually be achieved at idle through entering C-states. With the powersave governor, processes run at the lowest frequency and thus take longer to finish. This means it takes longer until the system can go into an idle C-state.
Tuning options: The range of minimum frequencies available to the
governor can be adjusted (for example, with the
cpupower command line tool).
The kernel implementation of a dynamic CPU frequency policy: The governor monitors the processor usage. When it exceeds a certain threshold, the governor will set the frequency to the highest available. If the usage is less than the threshold, the next lowest frequency is used. If the system continues to be underemployed, the frequency is again reduced until the lowest available frequency is set.
Not all drivers use the in-kernel governors to dynamically scale the CPU
frequency at runtime. For example, the intel_pstate driver adjusts the
frequency itself. Use the cpupower frequency-info command to find out
which driver your system uses.
11.3 The cpupower Tools
The cpupower tools are designed to give an overview
of all CPU power-related parameters that are supported
on a given machine, including turbo (or boost) states. Use the tool set to
view and modify settings of the kernel-related CPUfreq and cpuidle systems
and other settings not related to frequency scaling or idle states. The
integrated monitoring framework can access both kernel-related parameters
and hardware statistics. Therefore, it is ideally suited for performance
benchmarks. It also helps you to identify the dependencies between turbo and
idle states.
After installing the cpupower package, view the
available cpupower subcommands with
cpupower --help. Access the general man page with
man cpupower, and the man pages of the subcommands with
man cpupower-SUBCOMMAND.
11.3.1 Viewing Current Settings with cpupower
The cpupower frequency-info command shows the
statistics of the cpufreq driver used in the kernel. Additionally, it
shows if turbo (boost) states are supported and enabled in the BIOS.
Run without any options, it shows an output similar to the following:
Example 11.1: Example Output of cpupower frequency-info
root # cpupower frequency-info
analyzing CPU 0:
driver: intel_pstate
CPUs which run at the same hardware frequency: 0
CPUs which need to have their frequency coordinated by software: 0
maximum transition latency: 0.97 ms.
hardware limits: 1.20 GHz - 3.80 GHz
available cpufreq governors: performance, powersave
current policy: frequency should be within 1.20 GHz and 3.80 GHz.
The governor "powersave" may decide which speed to use
within this range.
current CPU frequency is 3.40 GHz (asserted by call to hardware).
boost state support:
Supported: yes
Active: yes
3500 MHz max turbo 4 active cores
3600 MHz max turbo 3 active cores
3600 MHz max turbo 2 active cores
3800 MHz max turbo 1 active cores
To get the current values for all CPUs, use
cpupower -c all frequency-info.
11.3.2 Viewing Kernel Idle Statistics with cpupower
The idle-info subcommand shows the statistics of the
cpuidle driver used in the kernel. It works on all architectures that
use the cpuidle kernel framework.
Example 11.2: Example Output of cpupower idle-info
root # cpupower idle-info
CPUidle driver: intel_idle
CPUidle governor: menu
Analyzing CPU 0:
Number of idle states: 6
Available idle states: POLL C1-SNB C1E-SNB C3-SNB C6-SNB C7-SNB
POLL:
Flags/Description: CPUIDLE CORE POLL IDLE
Latency: 0
Usage: 163128
Duration: 17585669
C1-SNB:
Flags/Description: MWAIT 0x00
Latency: 2
Usage: 16170005
Duration: 697658910
C1E-SNB:
Flags/Description: MWAIT 0x01
Latency: 10
Usage: 4421617
Duration: 757797385
C3-SNB:
Flags/Description: MWAIT 0x10
Latency: 80
Usage: 2135929
Duration: 735042875
C6-SNB:
Flags/Description: MWAIT 0x20
Latency: 104
Usage: 53268
Duration: 229366052
C7-SNB:
Flags/Description: MWAIT 0x30
Latency: 109
Usage: 62593595
Duration: 324631233978
After finding out which processor idle states are supported with
cpupower idle-info, individual states can be
disabled using the cpupower idle-set command.
Typically one wants to disable the deepest sleep state, for example:
root # cpupower idle-set -d 5
Or, to disable all idle states with latencies equal to or higher than 80:
root # cpupower idle-set -D 80
11.3.3 Monitoring Kernel and Hardware Statistics with cpupower
Use the monitor subcommand to report processor topology, and monitor frequency
and idle power state statistics over a certain period of time. The
default interval is 1 second, but it can be changed with the
-i option. Independent processor sleep states and
frequency counters are implemented in the tool—some retrieved
from kernel statistics, others reading out hardware registers. The
available monitors depend on the underlying hardware and the system.
List them with cpupower monitor -l.
For a description of the individual monitors, refer to the
cpupower-monitor man page.
The monitor subcommand allows you to execute
performance benchmarks. To compare kernel statistics with hardware
statistics for specific workloads, concatenate the respective command, for example:
cpupower monitor db_test.sh
Example 11.3: Example cpupower monitor Output
root # cpupower monitor
|Mperf || Idle_Stats
CPU | C0 | Cx | Freq || POLL | C1 | C2 | C3
0| 3.71| 96.29| 2833|| 0.00| 0.00| 0.02| 96.32
1| 100.0| -0.00| 2833|| 0.00| 0.00| 0.00| 0.00
2| 9.06| 90.94| 1983|| 0.00| 7.69| 6.98| 76.45
3| 7.43| 92.57| 2039|| 0.00| 2.60| 12.62| 77.52
Mperf shows the average frequency of a CPU, including boost
frequencies, over time. Additionally, it shows the percentage of time
the CPU has been active (C0) or in any sleep state (Cx).
Idle_Stats shows the statistics of the cpuidle kernel subsystem. The kernel updates these values every time an idle state is entered or left. Therefore there can be some inaccuracy when cores are in an idle state for some time when the measurement starts or ends.
Apart from the (general) monitors in the example above, other
architecture-specific monitors are available. For detailed
information, refer to the cpupower-monitor man
page.
By comparing the values of the individual monitors, you can find
correlations and dependencies and evaluate how well the power saving
mechanism works for a certain workload. In
Example 11.3 you can
see that CPU 0 is idle (the value of
Cx is near 100%), but runs at a very high frequency.
This is because CPUs 0 and 1
have the same frequency values, which means that there is a dependency
between them.
11.3.4 Modifying Current Settings with cpupower
You can use the
cpupower frequency-set command as root to
modify current settings. It allows you to set values for the minimum or
maximum CPU frequency the governor may select, or to set a new
governor. With the -c option, you can also specify for
which of the processors the settings should be modified. That makes it
easy to use a consistent policy across all processors without adjusting
the settings for each processor individually. For more details and the
available options, see the man page
cpupower-frequency-set or run
cpupower frequency-set
--help.
You can monitor system power consumption with powerTOP. It helps you to identify the reasons for unnecessarily high power consumption (for example, processes that are mainly responsible for waking up a processor from its idle state) and to optimize your system settings to avoid these. It supports both Intel and AMD processors.
powerTOP combines various sources of information (analysis of programs, device drivers, kernel options, amounts and sources of interrupts waking up processors from sleep states) and shows them in one screen. Example 11.4, “Example powerTOP Output” shows which information categories are available:
Example 11.4: Example powerTOP Output

Cn                Avg residency       P-states (frequencies)
C0 (cpu running)        (11.6%)       2.00 Ghz     0.1%
polling           0.0ms ( 0.0%)       2.00 Ghz     0.0%
C1                4.4ms (57.3%)       1.87 Ghz     0.0%
C2               10.0ms (31.1%)       1064 Mhz    99.9%

Wakeups-from-idle per second : 11.2     interval: 5.0s
no ACPI power usage estimate available

Top causes for wakeups:
  96.2% (826.0)       <interrupt> : extra timer interrupt
   0.9% (  8.0)     <kernel core> : usb_hcd_poll_rh_status (rh_timer_func)
   0.3% (  2.4)       <interrupt> : megasas
   0.2% (  2.0)     <kernel core> : clocksource_watchdog (clocksource_watchdog)
   0.2% (  1.6)       <interrupt> : eth1-TxRx-0
   0.1% (  1.0)       <interrupt> : eth1-TxRx-4
[...]

Suggestion: Enable SATA ALPM link power management via:
  echo min_power > /sys/class/scsi_host/host0/link_power_management_policy
or press the S key.
The Cn column shows the C-states. When working, the CPU is in state C0.
The Avg residency column shows the average time in milliseconds spent in the particular C-state.
The percentage next to it shows the share of time spent in the various C-states. For considerable power savings during idle, the CPU should be in deeper C-states most of the time. In addition, the longer the average time spent in these C-states, the more power is saved.
The P-states column shows the frequencies the processor and kernel driver support on your system.
The rightmost column shows the amount of time the CPU cores stayed at the different frequencies during the measuring period.
Wakeups-from-idle per second shows how often the CPU is awoken per second (number of interrupts). The lower the number, the better.
When running powerTOP on a laptop, this line displays the ACPI information on how much power is currently being used and the estimated time until discharge of the battery. On servers, this information is not available.
Top causes for wakeups shows what is causing the system to be more active than needed. powerTOP displays the top items causing your CPU to awake during the sampling period.
Finally, powerTOP prints suggestions on how to improve power usage for this machine.
For more information, refer to the powerTOP project page at https://01.org/powertop.
The following sections highlight important settings.
The CPUfreq subsystem offers several tuning options for P-states: You can switch between the different governors, influence minimum or maximum CPU frequency to be used or change individual governor parameters.
To switch to another governor at runtime, use
cpupower frequency-set with the -g option. For
example, running the following command (as root) will activate the
performance governor:
root # cpupower frequency-set -g performance
To set values for the minimum or maximum CPU frequency the governor may
select, use the -d or -u option,
respectively.
To use C-states or P-states, check your BIOS options:
To use C-states, make sure to enable CPU C State
or similar options to benefit from power savings at idle.
To use P-states and the CPUfreq governors, make sure to enable
Processor Performance States options or similar.
Even if P-states and C-states are available, it is possible that the
platform firmware is managing CPU frequencies which may be sub-optimal.
For example, if pcc-cpufreq is loaded then the
OS is only giving hints to the firmware, which is free to ignore the
hints. This can be addressed by selecting "OS Management" or a similar
option for CPU frequency management in the BIOS. After a reboot, an
alternative driver will be used, but the performance impact should be
carefully measured.
In case of a CPU upgrade, make sure to upgrade your BIOS, too. The BIOS needs to know the new CPU and its frequency stepping to pass this information on to the operating system.
Check the systemd journal (see Chapter 11, journalctl: Query the systemd Journal)
for any output regarding the CPUfreq subsystem. Only severe
errors are reported there.
If you suspect problems with the CPUfreq subsystem on your
machine, you can also enable additional debug output. To do so, either
use cpufreq.debug=7 as boot parameter or execute
the following command as root:
root # echo 7 > /sys/module/cpufreq/parameters/debug
This will cause CPUfreq to log more information to
dmesg on state transitions, which is useful for
diagnosis. But as this additional output of kernel messages can be
rather comprehensive, use it only if you are fairly sure that a
problem exists.
Platforms with a Baseboard Management Controller (BMC) may have additional power management configuration options accessible via the service processor. These configurations are vendor specific and therefore not subject of this guide. For more information, refer to the manuals provided by your vendor.
I/O scheduling controls how input/output operations will be submitted to
storage. openSUSE Leap offers various I/O algorithms—called
elevators—suiting different workloads.
Elevators can help to reduce seek operations and can prioritize I/O requests.
Choosing the best suited I/O elevator not only depends on the workload, but on the hardware, too. Single ATA disk systems, SSDs, RAID arrays, or network storage systems, for example, each require different tuning strategies.
openSUSE Leap picks a default I/O scheduler at boot-time, which can be changed on the fly per block device. This makes it possible to set different algorithms, for example, for the device hosting the system partition and the device hosting a database.
The default I/O scheduler is chosen for each device based on whether the
device reports itself to be a rotational disk or not. For non-rotational
disks, the DEADLINE I/O scheduler is picked.
Other devices default to
CFQ (Completely Fair Queuing).
To change this default, use the following boot parameter:
elevator=SCHEDULER
Replace SCHEDULER with one of the values
cfq, noop, or
deadline. See Section 12.2, “Available I/O Elevators”
for details.
To change the elevator for a specific device in the running system, run the following command:
tux > echo SCHEDULER | sudo tee /sys/block/DEVICE/queue/scheduler
Here, SCHEDULER is one of
cfq, noop, or deadline.
DEVICE is the block device
(sda for example). Note that this change will not
persist across reboots. To change the I/O scheduler permanently for a
particular device, either place the command switching the I/O scheduler
into an init script, or add an appropriate udev rule to
/lib/udev/rules.d/. See
/lib/udev/rules.d/60-ssd-scheduler.rules for an example
of such tuning.
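Such a udev rule might look like the following sketch. The file name and the choice of the deadline elevator are illustrative; compare the shipped 60-ssd-scheduler.rules for the exact rule used by the distribution:

```
# /etc/udev/rules.d/61-io-scheduler.rules (hypothetical file name)
# Select the deadline elevator for non-rotational block devices
ACTION=="add", SUBSYSTEM=="block", ATTR{queue/rotational}=="0", \
  ATTR{queue/scheduler}="deadline"
```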
The following lists the elevators available on openSUSE Leap. Each elevator has a set of tunable parameters, which can be set with the following command:
tux > echo VALUE | sudo tee /sys/block/DEVICE/queue/iosched/TUNABLE
where VALUE is the desired value for the TUNABLE and DEVICE the block device.
To find out which elevator is the current default, run the following command. The currently selected scheduler is listed in brackets:
jupiter:~ # cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
This file can also contain the string none, meaning that
I/O scheduling does not happen for this device. This is usually because the
device uses the multi-queue queuing mechanism (refer to Section 12.4, “Enable blk-mq I/O Path for SCSI by Default”).
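In scripts, the active elevator (the bracketed entry) can be extracted from that one-line format, for example with sed. The sample line is hard-coded here; on a real system it would be read from /sys/block/DEVICE/queue/scheduler:

```shell
# Print only the scheduler name enclosed in square brackets
echo "noop deadline [cfq]" | sed -n 's/.*\[\(.*\)\].*/\1/p'   # prints "cfq"
```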
12.2.1 CFQ (Completely Fair Queuing)
CFQ is a fairness-oriented
scheduler and is used by default on openSUSE Leap. The algorithm
assigns each thread a time slice in which it is allowed to submit I/O to
disk. This way each thread gets a fair share of I/O throughput. It also
allows assigning tasks I/O priorities which are taken into account
during scheduling decisions (see
Section 8.3.3, “Prioritizing Disk Access with ionice”). The
CFQ scheduler has the
following tunable parameters:
/sys/block/DEVICE/queue/iosched/slice_idle_us
When a task has no more I/O to submit in its time slice, the I/O
scheduler waits for a while before scheduling the next thread.
slice_idle_us is
the time in microseconds the I/O scheduler waits. The file
slice_idle controls the same tunable, but in
millisecond units. Waiting for more I/O from a thread can
improve locality of I/O. Additionally, it avoids starving processes
doing dependent I/O.
A process does dependent I/O if it needs a result of one I/O
to submit another I/O. For example, if you first need to read an index
block to find out a data block to read, these two reads form
a dependent I/O.
For media where locality does not play a big role (SSDs, SANs
with many disks), setting /sys/block/DEVICE/queue/iosched/slice_idle_us
to 0 can improve the throughput considerably.
/sys/block/DEVICE/queue/iosched/quantum
This option limits the maximum number of requests that are being
processed at once by the device. The default value is
4. For a storage with several disks, this setting
can unnecessarily limit parallel processing of requests. Therefore,
increasing the value can improve performance. However, it can also
cause latency of certain I/O operations to increase because more
requests are buffered inside the storage. When changing this value,
you can also consider tuning
/sys/block/DEVICE/queue/iosched/slice_async_rq
(the default value is 2). This limits the maximum
number of asynchronous requests—usually write
requests—that are submitted in one time slice.
/sys/block/DEVICE/queue/iosched/low_latency
When enabled (which is the default on openSUSE Leap) the scheduler
may dynamically adjust the length of the time slice by aiming to meet
a tuning parameter called the target_latency. Time
slices are recomputed to meet this target_latency
and ensure that processes get fair access within a bounded length of
time.
/sys/block/DEVICE/queue/iosched/target_latency
Contains an estimated latency time for
CFQ.
CFQ uses it to
calculate the time slice used for every task.
/sys/block/DEVICE/queue/iosched/group_idle_us
To avoid starvation of blkio cgroups doing dependent I/O, CFQ
waits a bit after the completion of I/O for one blkio cgroup before
scheduling I/O for a different blkio cgroup. When
slice_idle_us is set, this parameter does not
have a big impact. However, for fast media, the overhead of
slice_idle_us is generally undesirable.
Disabling slice_idle_us and setting
group_idle_us is a method to avoid starvation
of blkio cgroups doing dependent I/O, with lower overhead. Note that
the file group_idle controls the same tunable,
however with millisecond granularity.
In openSUSE Leap 42.3, the low_latency
tuning parameter is enabled by default to ensure that processes get fair
access within a bounded length of time. (This parameter was not enabled
by default in earlier versions.)
This is usually preferred in a server scenario where processes are executing I/O as part of transactions, as it makes the time needed for each transaction predictable. However, there are scenarios where that is not the desired behavior:
If the performance metric of interest is the peak performance of a single process when there is I/O contention.
If a workload must complete as quickly as possible and there are multiple sources of I/O. In this case, unfair treatment from the I/O scheduler may allow the transactions to complete faster: Processes take their full slice and exit quickly, resulting in reduced overall contention.
To address this, there are two options—increase
target_latency or disable
low_latency. As with all tuning parameters it is
important to verify your workload behaves as expected before and after
the tuning modification. Take careful note of whether your workload
depends on individual process peak performance or scales better with
fairness. It should also be noted that the performance will depend on
the underlying storage and the correct tuning option for one
installation may not be universally true.
Find below an example that does not control when I/O starts but is
simple enough to demonstrate the point. 32 processes are writing a
small amount of data to disk in parallel. Using the openSUSE Leap
default (enabling low_latency), the result looks as
follows:
root # echo 1 > /sys/block/sda/queue/iosched/low_latency
root # time ./dd-test.sh
10485760 bytes (10 MB) copied, 2.62464 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 3.29624 s, 3.2 MB/s
10485760 bytes (10 MB) copied, 3.56341 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.56908 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53043 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.57511 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.53672 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.5433 s, 3.0 MB/s
10485760 bytes (10 MB) copied, 3.65474 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.63694 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.90122 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88507 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.86135 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.84553 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.88871 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 3.94943 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.12731 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.15106 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.21601 s, 2.5 MB/s
10485760 bytes (10 MB) copied, 4.35004 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.33387 s, 2.4 MB/s
10485760 bytes (10 MB) copied, 4.55434 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52283 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.52682 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.56176 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.62727 s, 2.3 MB/s
10485760 bytes (10 MB) copied, 4.78958 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.79772 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.78004 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.77994 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.86114 s, 2.2 MB/s
10485760 bytes (10 MB) copied, 4.88062 s, 2.1 MB/s
real 0m4.978s
user 0m0.112s
sys 0m1.544s
Note that each process completes in similar times. This is the
CFQ scheduler meeting its
target_latency: Each process has fair access
to storage.
Note that the earlier processes complete somewhat faster. This happens because the start time of the processes is not identical. In a more complicated example, it is possible to control for this.
This is what happens when low_latency is disabled:
root # echo 0 > /sys/block/sda/queue/iosched/low_latency
root # time ./dd-test.sh
10485760 bytes (10 MB) copied, 0.813519 s, 12.9 MB/s
10485760 bytes (10 MB) copied, 0.788106 s, 13.3 MB/s
10485760 bytes (10 MB) copied, 0.800404 s, 13.1 MB/s
10485760 bytes (10 MB) copied, 0.816398 s, 12.8 MB/s
10485760 bytes (10 MB) copied, 0.959087 s, 10.9 MB/s
10485760 bytes (10 MB) copied, 1.09563 s, 9.6 MB/s
10485760 bytes (10 MB) copied, 1.18716 s, 8.8 MB/s
10485760 bytes (10 MB) copied, 1.27661 s, 8.2 MB/s
10485760 bytes (10 MB) copied, 1.46312 s, 7.2 MB/s
10485760 bytes (10 MB) copied, 1.55489 s, 6.7 MB/s
10485760 bytes (10 MB) copied, 1.64277 s, 6.4 MB/s
10485760 bytes (10 MB) copied, 1.78196 s, 5.9 MB/s
10485760 bytes (10 MB) copied, 1.87496 s, 5.6 MB/s
10485760 bytes (10 MB) copied, 1.9461 s, 5.4 MB/s
10485760 bytes (10 MB) copied, 2.08351 s, 5.0 MB/s
10485760 bytes (10 MB) copied, 2.28003 s, 4.6 MB/s
10485760 bytes (10 MB) copied, 2.42979 s, 4.3 MB/s
10485760 bytes (10 MB) copied, 2.54564 s, 4.1 MB/s
10485760 bytes (10 MB) copied, 2.6411 s, 4.0 MB/s
10485760 bytes (10 MB) copied, 2.75171 s, 3.8 MB/s
10485760 bytes (10 MB) copied, 2.86162 s, 3.7 MB/s
10485760 bytes (10 MB) copied, 2.98453 s, 3.5 MB/s
10485760 bytes (10 MB) copied, 3.13723 s, 3.3 MB/s
10485760 bytes (10 MB) copied, 3.36399 s, 3.1 MB/s
10485760 bytes (10 MB) copied, 3.60018 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.58151 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.67385 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.69471 s, 2.8 MB/s
10485760 bytes (10 MB) copied, 3.66658 s, 2.9 MB/s
10485760 bytes (10 MB) copied, 3.81495 s, 2.7 MB/s
10485760 bytes (10 MB) copied, 4.10172 s, 2.6 MB/s
10485760 bytes (10 MB) copied, 4.0966 s, 2.6 MB/s
real 0m3.505s
user 0m0.160s
sys 0m1.516s
Note that the time processes take to complete is spread much wider as processes are not getting fair access. Some processes complete faster and exit, allowing the total workload to complete faster, and some processes measure higher apparent I/O performance. It is also important to note that this example may not behave similarly on all systems as the results depend on the resources of the machine and the underlying storage.
It is important to emphasize that neither tuning option is inherently better than the other. Both are best in different circumstances and it is important to understand the requirements of your workload and tune accordingly.
NOOP #A trivial scheduler that only passes down the I/O that comes to it. Useful for checking whether complex I/O scheduling decisions of other schedulers are causing I/O performance regressions.
This scheduler is recommended for setups with devices that do I/O scheduling themselves, such as intelligent storage or in multipathing environments. If you choose a more complicated scheduler on the host, the scheduler of the host and the scheduler of the storage device compete with each other. This can decrease performance. The storage device can usually determine best how to schedule I/O.
For similar reasons, this scheduler is also recommended for use within virtual machines.
The NOOP scheduler can be
useful for devices that do not depend on mechanical movement, like SSDs.
Usually, the
DEADLINE I/O scheduler is a
better choice for these devices. However,
NOOP creates less overhead and
thus can increase performance for certain workloads.
DEADLINE #
DEADLINE is a latency-oriented
I/O scheduler. Each I/O request is assigned a deadline. Usually,
requests are stored in queues (read and write) sorted by sector numbers.
The DEADLINE algorithm
maintains two additional queues (read and write) in which requests are
sorted by deadline. As long as no request has timed out, the
“sector” queue is used. When timeouts occur, requests from
the “deadline” queue are served until there are no more
expired requests. Generally, the algorithm prefers reads over writes.
This scheduler can provide a superior throughput over the
CFQ I/O scheduler in cases
where several threads read and write and fairness is not an issue. For
example, for several parallel readers from a SAN and for databases
(especially when using “TCQ” disks). The
DEADLINE scheduler has the
following tunable parameters:
/sys/block/<device>/queue/iosched/writes_starved
Controls how many reads can be sent to disk before it is possible to
send writes. A value of 3 means that three read
operations are carried out for one write operation.
/sys/block/<device>/queue/iosched/read_expire
Sets the deadline (current time plus the read_expire value) for read operations in milliseconds. The default is 500.
/sys/block/<device>/queue/iosched/write_expire
Sets the deadline (current time plus the write_expire value) for write
operations in milliseconds. The default is 5000.
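As with CFQ, these tunables are plain sysfs files. The following helper is a sketch only; the values are illustrative and the optional second argument allows testing against an alternative sysfs root:

```shell
# Sketch only: bias DEADLINE further toward reads. Illustrative values;
# measure your workload before and after changing them.
deadline_favor_reads() {
    iosched="${2:-/sys}/block/$1/queue/iosched"
    echo 4   > "$iosched/writes_starved"  # serve 4 read batches per write batch
    echo 250 > "$iosched/read_expire"     # tighter read deadline (ms)
}
# as root: deadline_favor_reads sda
```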
Most file systems (such as XFS, Ext3, or Ext4) send write barriers to disk after fsync or during transaction commits. Write barriers enforce proper ordering of writes, making volatile disk write caches safe to use (at some performance penalty). If your disks are battery-backed in one way or another, disabling barriers can safely improve performance.
Sending write barriers can be disabled using the
nobarrier mount option.
Disabling barriers when disks cannot guarantee caches are properly written in case of power failure can lead to severe file system corruption and data loss.
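For illustration, such a mount configuration might look as follows. The device, mount point, and file system are placeholders, and the option must only be used when the disk write cache is protected against power failure:

```
# /etc/fstab fragment (placeholders; requires a battery-backed write cache)
/dev/sdb1  /data  xfs  defaults,nobarrier  0  2
```

Alternatively, the option can be applied at runtime with `mount -o remount,nobarrier MOUNTPOINT`.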
Block multiqueue (blk-mq) is a multi-queue block I/O queuing mechanism. Blk-mq uses per-cpu software queues to queue I/O requests. The software queues are mapped to one or more hardware submission queues. Blk-mq significantly reduces lock contention. In particular blk-mq improves performance for devices that support a high number of input/output operations per second (IOPS). Blk-mq is already the default for some devices, for example, NVM Express devices.
Currently blk-mq has no I/O scheduling support (no CFQ, no deadline I/O scheduler). This lack of I/O scheduling can cause significant performance degradation when spinning disks are used. Therefore blk-mq is not enabled by default for SCSI devices.
If you have fast SCSI devices (for example, SSDs) instead of SCSI
hard disks attached to your system, consider switching to
blk-mq for SCSI. This is done using the kernel command line option
scsi_mod.use_blk_mq=1.
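As a sketch, the option can be appended to the default kernel command line of a GRUB 2 setup. The file path and the sed pattern assume a quoted GRUB_CMDLINE_LINUX_DEFAULT line; adjust as needed:

```shell
# Sketch only: append scsi_mod.use_blk_mq=1 to the default kernel command line.
enable_scsi_blk_mq() {
    grub="${1:-/etc/default/grub}"
    grep -q 'scsi_mod\.use_blk_mq=1' "$grub" && return 0   # already set
    sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 scsi_mod.use_blk_mq=1"/' "$grub"
}
# afterwards regenerate the boot configuration, for example:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```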
Modern operating systems, such as openSUSE® Leap, normally run many tasks at the same time. For example, you can be searching in a text file while receiving an e-mail and copying a big file to an external hard disk. These simple tasks require many additional processes to be run by the system. To provide each task with its required system resources, the Linux kernel needs a tool to distribute available system resources to individual tasks. And this is exactly what the task scheduler does.
The following sections explain the most important terms related to a process scheduling. They also introduce information about the task scheduler policy, scheduling algorithm, description of the task scheduler used by openSUSE Leap, and references to other sources of relevant information.
The Linux kernel controls the way that tasks (or processes) are managed on the system. The task scheduler, sometimes called process scheduler, is the part of the kernel that decides which task to run next. It is responsible for best using system resources to guarantee that multiple tasks are being executed simultaneously. This makes it a core component of any multitasking operating system.
The theory behind task scheduling is very simple. If there are runnable processes in a system, at least one process must always be running. If there are more runnable processes than processors in a system, not all the processes can be running all the time.
Therefore, some processes need to be stopped temporarily, or suspended, so that others can be running again. The scheduler decides what process in the queue will run next.
As already mentioned, Linux, like all other Unix variants, is a multitasking operating system. That means that several tasks can be running at the same time. Linux provides so-called preemptive multitasking, where the scheduler decides when a process is suspended. This forced suspension is called preemption. All Unix flavors have been providing preemptive multitasking since the beginning.
The time period for which a process will be running before it is preempted is defined in advance. It is called a timeslice of a process and represents the amount of processor time that is provided to each process. By assigning timeslices, the scheduler makes global decisions for the running system, and prevents individual processes from dominating over the processor resources.
The scheduler evaluates processes based on their priority. To calculate the current priority of a process, the task scheduler uses complex algorithms. As a result, each process is given a value according to which it is “allowed” to run on a processor.
Processes are usually classified according to their purpose and behavior. Although the borderline is not always clearly distinct, generally two criteria are used to sort them. These criteria are independent and do not exclude each other.
One approach is to classify a process as either I/O-bound or processor-bound.
I/O stands for Input/Output devices, such as keyboards, mice, or optical and hard disks. I/O-bound processes spend the majority of time submitting and waiting for requests. They are run very frequently, but for short time intervals, not to block other processes waiting for I/O requests.
On the other hand, processor-bound tasks use their time to execute code, and usually run until they are preempted by the scheduler. They do not block processes waiting for I/O requests, and, therefore, can be run less frequently but for longer time intervals.
Another approach is to divide processes by type into interactive, batch, and real-time processes.
Interactive processes spend a lot of time waiting for I/O requests, such as keyboard or mouse operations. The scheduler must wake up such processes quickly on user request, or the user will find the environment unresponsive. The typical delay is approximately 100 ms. Office applications, text editors or image manipulation programs represent typical interactive processes.
Batch processes often run in the background and do not need to be responsive. They usually receive lower priority from the scheduler. Multimedia converters, database search engines, or log files analyzers are typical examples of batch processes.
Real-time processes must never be blocked by low-priority processes, and the scheduler guarantees a short response time to them. Applications for editing multimedia content are a good example here.
Since the Linux kernel version 2.6.23, a new approach has been taken to the scheduling of runnable processes. Completely Fair Scheduler (CFS) became the default Linux kernel scheduler. Since then, important changes and improvements have been made. The information in this chapter applies to openSUSE Leap with kernel version 2.6.32 and higher (including 3.x kernels). The scheduler environment was divided into several parts, and three main new features were introduced:
The core of the scheduler was enhanced with scheduling classes. These classes are modular and represent scheduling policies.
Introduced in kernel 2.6.23 and extended in 2.6.24, CFS tries to assure that each process obtains its “fair” share of the processor time.
For example, if you split processes into groups according to which user is running them, CFS tries to provide each of these groups with the same amount of processor time.
As a result, CFS brings optimized scheduling for both servers and desktops.
CFS tries to guarantee a fair approach to each runnable task. To find the most balanced way of task scheduling, it uses the concept of a red-black tree. A red-black tree is a type of self-balancing binary search tree which provides inserting and removing entries in a reasonable way so that it remains well balanced. For more information, see the wiki pages of Red-black tree.
When CFS schedules a task it accumulates “virtual runtime” or vruntime. The next task picked to run is always the task with the minimum accumulated vruntime so far. By balancing the red-black tree when tasks are inserted into the run queue (a planned time line of processes to be executed next), the task with the minimum vruntime is always the first entry in the red-black tree.
The amount of vruntime a task accrues is related to its priority. High priority tasks gain vruntime at a slower rate than low priority tasks, which results in high priority tasks being picked to run on the processor more often.
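The pick-the-minimum rule can be illustrated with a toy model. This is not kernel code: the run queue is simply lines of "vruntime task", and sorting stands in for the red-black tree's leftmost-node lookup:

```shell
# Toy model of the CFS pick (not kernel code): read "vruntime task" lines
# on stdin and print the task with the lowest accumulated vruntime.
pick_next_task() {
    sort -n | head -n 1 | awk '{print $2}'
}
# printf '300 editor\n100 compiler\n200 daemon\n' | pick_next_task
```

In this model the task named on the line with the smallest vruntime is printed, just as CFS runs the task at the leftmost position of its tree.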
Since the Linux kernel version 2.6.24, CFS can be tuned to be fair to groups rather than to tasks only. Runnable tasks are then grouped to form entities, and CFS tries to be fair to these entities instead of individual runnable tasks. The scheduler also tries to be fair to individual tasks within these entities.
The kernel scheduler lets you group runnable tasks using control groups. For more information, see Chapter 9, Kernel Control Groups.
Basic aspects of the task scheduler behavior can be set through the kernel configuration options. Setting these options is part of the kernel compilation process. Because the kernel compilation process is a complex task and out of this document's scope, refer to relevant sources of information.
If you run openSUSE Leap on a kernel that was not shipped with it, for example on a self-compiled kernel, you lose the entire support entitlement.
Documents regarding task scheduling policy often use several technical terms which you need to know to understand the information correctly. Here are some:
Latency
Delay between the time a process is scheduled to run and the actual process execution.
Granularity
The relation between granularity and latency can be expressed by the following equation:
gran = ( lat / rtasks ) - ( lat / rtasks / rtasks )
where gran stands for granularity, lat stands for latency, and rtasks is the number of running tasks.
The Linux kernel supports the following scheduling policies:
SCHED_FIFO
Scheduling policy designed for special time-critical applications. It uses the First In-First Out scheduling algorithm.
SCHED_BATCH
Scheduling policy designed for CPU-intensive tasks.
SCHED_IDLE
Scheduling policy intended for very low prioritized tasks.
SCHED_OTHER
Default Linux time-sharing scheduling policy used by the majority of processes.
SCHED_RR
Similar to SCHED_FIFO, but uses the Round Robin scheduling algorithm.
chrt #
The chrt command sets or retrieves the real-time
scheduling attributes of a running process, or runs a command with the
specified attributes. You can both retrieve and set the scheduling policy
and the priority of a process.
In the following examples, a process whose PID is 16244 is used.
To retrieve the real-time attributes of an existing task:
root # chrt -p 16244
pid 16244's current scheduling policy: SCHED_OTHER
pid 16244's current scheduling priority: 0
Before setting a new scheduling policy on the process, you need to find out the minimum and maximum valid priorities for each scheduling algorithm:
root # chrt -m
SCHED_OTHER min/max priority : 0/0
SCHED_FIFO min/max priority : 1/99
SCHED_RR min/max priority : 1/99
SCHED_BATCH min/max priority : 0/0
SCHED_IDLE min/max priority : 0/0
In the above example, the SCHED_OTHER, SCHED_BATCH, and SCHED_IDLE policies only allow priority 0, while the priority of SCHED_FIFO and SCHED_RR can range from 1 to 99.
To set SCHED_BATCH scheduling policy:
root # chrt -b -p 0 16244
pid 16244's current scheduling policy: SCHED_BATCH
pid 16244's current scheduling priority: 0
For more information on chrt, see its man page
(man 1 chrt).
sysctl #
The sysctl interface for examining and changing
kernel parameters at runtime introduces important variables by means of
which you can change the default behavior of the task scheduler. The
syntax of sysctl is simple, and all the following
commands must be entered on the command line as root.
To read a value from a kernel variable, enter
root # sysctl VARIABLE
To assign a value, enter
root # sysctl VARIABLE=VALUE
To get a list of all scheduler related sysctl
variables, enter
root # sysctl -A | grep "sched" | grep -v "domain"
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.sched_child_runs_first = 0
kernel.sched_compat_yield = 0
kernel.sched_latency_ns = 24000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 8000000
kernel.sched_nr_migrate = 32
kernel.sched_rr_timeslice_ms = 25
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_schedstats = 0
kernel.sched_shares_window_ns = 10000000
kernel.sched_time_avg_ms = 1000
kernel.sched_tunable_scaling = 1
kernel.sched_wakeup_granularity_ns = 10000000
Note that variables ending with “_ns” and “_us” accept values in nanoseconds and microseconds, respectively.
A list of the most important task scheduler sysctl
tuning variables (located at /proc/sys/kernel/)
with a short description follows:
sched_cfs_bandwidth_slice_us
When CFS bandwidth control is in use, this parameter controls the amount of run-time (bandwidth) transferred to a run queue from the task's control group bandwidth pool. Small values allow the global bandwidth to be shared in a fine-grained manner among tasks, larger values reduce transfer overhead. See https://www.kernel.org/doc/Documentation/scheduler/sched-bwc.txt.
sched_child_runs_first
A freshly forked child runs before the parent continues execution.
Setting this parameter to 1 is beneficial for an
application in which the child performs an exec after fork. For
example, make -j<NO_CPUS>
performs better when sched_child_runs_first is turned off. The
default value is 0.
sched_compat_yield
Enables the aggressive CPU yielding behavior of the old
O(1) scheduler by moving the relinquishing task to
the end of the runnable queue (right-most position in the red-black
tree). Applications that depend on the sched_yield(2)
syscall behavior may see performance improvements by giving other
processes a chance to run when there are highly contended resources
(such as locks). On the other hand, given that this call occurs in
context switching, misusing the call can hurt the workload. Only use it
when you see a drop in performance. The default value is
0.
sched_migration_cost_ns
Amount of time after the last execution that a task is considered to
be “cache hot” in migration decisions. A
“hot” task is less likely to be migrated to another CPU,
so increasing this variable reduces task migrations. The default value is
500000 (ns).
If the CPU idle time is higher than expected when there are runnable processes, try reducing this value. If tasks bounce between CPUs or nodes too often, try increasing it.
sched_latency_ns
Targeted preemption latency for CPU bound tasks. Increasing this variable increases a CPU bound task's timeslice. A task's timeslice is its weighted fair share of the scheduling period:
timeslice = scheduling period * (task's weight/total weight of tasks in the run queue)
The task's weight depends on the task's nice level and the scheduling policy. Minimum task weight for a SCHED_OTHER task is 15, corresponding to nice 19. The maximum task weight is 88761, corresponding to nice -20.
Timeslices become smaller as the load increases. When the number of
runnable tasks exceeds
sched_latency_ns/sched_min_granularity_ns,
the slice becomes number_of_running_tasks *
sched_min_granularity_ns. Prior to that, the
slice is equal to sched_latency_ns.
This value also specifies the maximum amount of time during which a
sleeping task is considered to be running for entitlement
calculations. Increasing this variable increases the amount of time a
waking task may consume before being preempted, thus increasing
scheduler latency for CPU bound tasks. The default value is
6000000 (ns).
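The period rule above can be written out as a small calculation. This is a sketch with illustrative values, all in nanoseconds:

```shell
# Scheduling period per the rule above: sched_latency_ns while few tasks run,
# nr_running * sched_min_granularity_ns once the task count exceeds
# sched_latency_ns / sched_min_granularity_ns.
sched_period_ns() {
    latency="$1"; min_gran="$2"; nr_running="$3"
    if [ "$nr_running" -gt $(( latency / min_gran )) ]; then
        echo $(( nr_running * min_gran ))
    else
        echo "$latency"
    fi
}
# sched_period_ns 6000000 4000000 1   → 6000000
# sched_period_ns 6000000 4000000 8   → 32000000
```

Each task's slice is then its weighted share of this period, as described above.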
sched_min_granularity_ns
Minimal preemption granularity for CPU bound tasks. See
sched_latency_ns for details. The default
value is 4000000 (ns).
sched_wakeup_granularity_ns
The wake-up preemption granularity. Increasing this variable reduces
wake-up preemption, reducing disturbance of compute bound tasks.
Lowering it improves wake-up latency and throughput for latency
critical tasks, particularly when a short duty cycle load component
must compete with CPU bound components. The default value is
2500000 (ns).
Settings larger than half of
sched_latency_ns will result in no wake-up
preemption. Short duty cycle tasks will be unable to compete with
CPU hogs effectively.
sched_rr_timeslice_ms
Quantum that SCHED_RR tasks are allowed to run before they are preempted and put to the end of the task list.
sched_rt_period_us
Period over which real-time task bandwidth enforcement is measured.
The default value is 1000000 (µs).
sched_rt_runtime_us
Quantum allocated to real-time tasks during sched_rt_period_us.
Setting it to -1 disables RT bandwidth enforcement. By default, RT tasks
may consume 95% CPU/sec, leaving 5% CPU/sec, or 0.05 s, to be used by
SCHED_OTHER tasks. The default value is 950000
(µs).
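The CPU share implied by the two settings can be checked with a one-line calculation (illustrative helper, not part of the kernel interface):

```shell
# Percentage of CPU time reserved for real-time tasks:
# 100 * sched_rt_runtime_us / sched_rt_period_us
rt_share_percent() {
    echo $(( 100 * $1 / $2 ))
}
# rt_share_percent 950000 1000000   → 95
```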
sched_nr_migrate
Controls how many tasks can be migrated across processors for
load-balancing purposes. Because balancing iterates the runqueue
with interrupts disabled (softirq), it can incur irq-latency
penalties for real-time tasks. Therefore increasing this value
may give a performance boost to large SCHED_OTHER threads at the
expense of increased irq-latencies for real-time tasks. The default
value is 32.
sched_time_avg_ms
This parameter sets the period over which the time spent running real-time tasks is averaged. That average assists CFS in making load-balancing decisions and gives an indication of how busy a CPU is with high-priority real-time tasks.
The optimal setting for this parameter is highly workload dependent and depends, among other things, on how frequently real-time tasks are running and for how long.
CFS comes with a new improved debugging interface, and provides runtime
statistics information. Relevant files were added to the
/proc file system, which can be examined simply
with the cat or less command. A
list of the related /proc files follows with their
short description:
/proc/sched_debug
Contains the current values of all tunable variables (see Section 13.3.6, “Runtime Tuning with sysctl”) that affect the task
scheduler behavior, CFS statistics, and information about the run queues
(CFS, RT and deadline) on all available processors. A summary of the
task running on each processor is also shown, with the task name and
PID, along with scheduler-specific statistics. The first
column, tree-key, indicates the task's virtual
runtime; its name comes from the kernel sorting all runnable tasks
by this key in a red-black tree. The switches column
indicates the total number of switches (involuntary or not), and
naturally the prio refers to the process priority. The
wait-time value indicates the amount of time the task
waited to be scheduled. Finally both sum-exec and
sum-sleep account for the total amount of time (in
nanoseconds) the task was running on the processor or asleep,
respectively.
root # cat /proc/sched_debug
Sched Debug Version: v0.11, 4.4.21-64-default #1
ktime : 23533900.395978
sched_clk : 23543587.726648
cpu_clk : 23533900.396165
jiffies : 4300775771
sched_clock_stable : 0
sysctl_sched
.sysctl_sched_latency : 6.000000
.sysctl_sched_min_granularity : 2.000000
.sysctl_sched_wakeup_granularity : 2.500000
.sysctl_sched_child_runs_first : 0
.sysctl_sched_features : 154871
.sysctl_sched_tunable_scaling : 1 (logaritmic)
cpu#0, 2666.762 MHz
.nr_running : 1
.load : 1024
.nr_switches : 1918946
[...]
cfs_rq[0]:/
.exec_clock : 170176.383770
.MIN_vruntime : 0.000001
.min_vruntime : 347375.854324
.max_vruntime : 0.000001
[...]
rt_rq[0]:/
.rt_nr_running : 0
.rt_throttled : 0
.rt_time : 0.000000
.rt_runtime : 950.000000
dl_rq[0]:
.dl_nr_running : 0
task PID tree-key switches prio wait-time [...]
------------------------------------------------------------------------
R cc1 63477 98876.717832 197 120 0.000000 ...
/proc/schedstat
Displays statistics relevant to the current run queue. Also
domain-specific statistics for SMP systems are displayed for all
connected processors. Because the output format is not user-friendly,
read the contents of
/usr/src/linux/Documentation/scheduler/sched-stats.txt
for more information.
/proc/PID/sched
Displays scheduling information on the process with id PID.
root # cat /proc/$(pidof gdm)/sched
gdm (744, #threads: 3)
-------------------------------------------------------------------
se.exec_start : 8888.758381
se.vruntime : 6062.853815
se.sum_exec_runtime : 7.836043
se.statistics.wait_start : 0.000000
se.statistics.sleep_start : 8888.758381
se.statistics.block_start : 0.000000
se.statistics.sleep_max : 1965.987638
[...]
se.avg.decay_count : 8477
policy : 0
prio : 120
clock-delta : 128
mm->numa_scan_seq : 0
numa_migrations, 0
numa_faults_memory, 0, 0, 1, 0, -1
numa_faults_memory, 1, 0, 0, 0, -1
To get a compact overview of Linux kernel task scheduling, you need to explore several information sources. Here are some:
For task scheduler System Calls description, see the relevant manual
page (for example man 2 sched_setaffinity).
General information on scheduling is described on the Scheduling wiki page.
A useful lecture on Linux scheduler policy and algorithm is available in http://www.inf.fu-berlin.de/lehre/SS01/OS/Lectures/Lecture08.pdf.
A good overview of Linux process scheduling is given in Linux Kernel Development by Robert Love (ISBN-10: 0-672-32512-8). See http://www.informit.com/articles/article.aspx?p=101760.
A very comprehensive overview of the Linux kernel internals is given in Understanding the Linux Kernel by Daniel P. Bovet and Marco Cesati (ISBN 978-0-596-00565-8).
Technical information about task scheduler is covered in files under
/usr/src/linux/Documentation/scheduler.
To understand and tune the memory management behavior of the kernel, it is important to first have an overview of how it works and cooperates with other subsystems.
The memory management subsystem, also called the virtual memory manager, will subsequently be called “VM”. The role of the VM is to manage the allocation of physical memory (RAM) for the entire kernel and user programs. It is also responsible for providing a virtual memory environment for user processes (managed via POSIX APIs with Linux extensions). Finally, the VM is responsible for freeing up RAM when there is a shortage, either by trimming caches or swapping out “anonymous” memory.
The most important thing to understand when examining and tuning VM is how its caches are managed. The basic goal of the VM's caches is to minimize the cost of I/O as generated by swapping and file system operations (including network file systems). This is achieved by avoiding I/O completely, or by submitting I/O in better patterns.
Free memory will be used and filled up by these caches as required. The more memory is available for caches and anonymous memory, the more effectively caches and swapping will operate. However, if a memory shortage is encountered, caches will be trimmed or memory will be swapped out.
For a particular workload, the first thing that can be done to improve performance is to increase memory and reduce the frequency that memory must be trimmed or swapped. The second thing is to change the way caches are managed by changing kernel parameters.
Finally, the workload itself should be examined and tuned as well. If an application is allowed to run more processes or threads, effectiveness of VM caches can be reduced, if each process is operating in its own area of the file system. Memory overheads are also increased. If applications allocate their own buffers or caches, larger caches will mean that less memory is available for VM caches. However, more processes and threads can mean more opportunity to overlap and pipeline I/O, and may take better advantage of multiple cores. Experimentation will be required for the best results.
Memory allocations in general can be characterized as “pinned” (also known as “unreclaimable”), “reclaimable” or “swappable”.
Anonymous memory tends to be program heap and stack memory (for example,
malloc()). It is swappable, except in special
cases such as mlock or if there is no available swap
space. Anonymous memory must be written to swap before it can be
reclaimed. Swap I/O (both swapping in and swapping out pages) tends to
be less efficient than pagecache I/O, because of allocation and access
patterns.
A cache of file data. When a file is read from disk or network, the contents are stored in pagecache. No disk or network access is required, if the contents are up-to-date in pagecache. tmpfs and shared memory segments count toward pagecache.
When a file is written to, the new data is stored in pagecache before being written back to a disk or the network (making it a write-back cache). When a page has new data not written back yet, it is called “dirty”. Pages not classified as dirty are “clean”. Clean pagecache pages can be reclaimed if there is a memory shortage by simply freeing them. Dirty pages must first be made clean before being reclaimed.
This is a type of pagecache for block devices (for example, /dev/sda). A file system typically uses the buffercache when accessing its on-disk metadata structures such as inode tables, allocation bitmaps, and so forth. Buffercache can be reclaimed similarly to pagecache.
Buffer heads are small auxiliary structures that tend to be allocated upon pagecache access. They can generally be reclaimed easily when the pagecache or buffercache pages are clean.
As applications write to files, the pagecache becomes dirty and the buffercache may become dirty. When the amount of dirty memory reaches a specified amount in bytes (vm.dirty_background_bytes), or when the amount of dirty memory reaches a specific ratio to total memory (vm.dirty_background_ratio), or when the pages have been dirty for longer than a specified amount of time (vm.dirty_expire_centisecs), the kernel begins writeback of pages, starting with files that had their pages dirtied first. The background bytes and ratios are mutually exclusive and setting one will overwrite the other. Flusher threads perform writeback in the background and allow applications to continue running. If the I/O cannot keep up with applications dirtying pagecache, and dirty data reaches a critical setting (vm.dirty_bytes or vm.dirty_ratio), then applications begin to be throttled to prevent dirty data from exceeding this threshold.
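As a rough sketch of how a ratio translates into a byte threshold (the kernel accounts against free plus reclaimable memory; total memory is used below only as an approximation, and the helper name is hypothetical):

```shell
#!/bin/sh
# Approximate the dirty-memory threshold implied by a ratio.
# Usage: dirty_threshold_kb TOTAL_KB RATIO
dirty_threshold_kb() {
    echo $(( $1 * $2 / 100 ))
}

# With 8 GiB of memory and the default vm.dirty_background_ratio of 10,
# background writeback starts once a bit over 800 MiB of pagecache is dirty.
dirty_threshold_kb 8388608 10
```

On a live system, read MemTotal from /proc/meminfo and the ratio with sysctl -n vm.dirty_background_ratio.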
The VM monitors file access patterns and may attempt to perform readahead. Readahead reads pages into the pagecache from the file system that have not been requested yet. It is done to allow fewer, larger I/O requests to be submitted (which is more efficient), and to allow I/O to be pipelined (performed at the same time as the application is running).
This is an in-memory cache of the inode structures for each file system. These contain attributes such as the file size, permissions and ownership, and pointers to the file data.
This is an in-memory cache of the directory entries in the system. These contain a name (the name of a file), the inode which it refers to, and children entries. This cache is used when traversing the directory structure and accessing a file by name.
Applications running on openSUSE Leap 42.3 can allocate
more memory than on openSUSE Leap 10. This is because
glibc changed its default
behavior when allocating user space memory. See
http://www.gnu.org/s/libc/manual/html_node/Malloc-Tunable-Parameters.html
for an explanation of these parameters.
To restore an openSUSE Leap 10-like behavior, M_MMAP_THRESHOLD should be set to 128*1024. This can be done with the mallopt() call from the application, or by setting the MALLOC_MMAP_THRESHOLD environment variable before running the application.
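A minimal sketch of the environment-variable approach. Note that the variable glibc actually honors is spelled with a trailing underscore (MALLOC_MMAP_THRESHOLD_), and myapp below is a placeholder name:

```shell
#!/bin/sh
# 128*1024 bytes is the threshold above which glibc serves allocations
# with mmap() instead of extending the heap.
threshold=$((128 * 1024))
echo "MALLOC_MMAP_THRESHOLD_=$threshold"

# To launch an application (placeholder name) with the old behavior:
#   MALLOC_MMAP_THRESHOLD_=131072 myapp
```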
Kernel memory that is reclaimable (caches, described above) will be trimmed automatically during memory shortages. Most other kernel memory cannot be easily reduced but is a property of the workload given to the kernel.
Reducing the requirements of the user space workload will reduce the kernel memory usage (fewer processes, fewer open files and sockets, etc.)
If the memory cgroups feature is not needed, it can be switched off by passing cgroup_disable=memory on the kernel command line, reducing memory consumption of the kernel a bit. There is also a slight performance benefit as there is a small amount of accounting overhead when memory cgroups are available even if none are configured.
When tuning the VM it should be understood that some changes will take time to affect the workload and take full effect. If the workload changes throughout the day, it may behave very differently at different times. A change that increases throughput under some conditions may decrease it under other conditions.
/proc/sys/vm/swappiness
This control is used to define how aggressively the kernel swaps out
anonymous memory relative to pagecache and other caches. Increasing
the value increases the amount of swapping. The default value is
60.
Swap I/O tends to be much less efficient than other I/O. However, some pagecache pages will be accessed much more frequently than less used anonymous memory. The right balance should be found here.
If swap activity is observed during slowdowns, it may be worth reducing this parameter. If there is a lot of I/O activity and the amount of pagecache in the system is rather small, or if there are large dormant applications running, increasing this value might improve performance.
Note that the more data is swapped out, the longer the system will take to swap data back in when it is needed.
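A sketch of adjusting the tunable with sysctl (the value 10 is purely illustrative; test against your workload):

```shell
# Read the current value (the default is 60):
sysctl -n vm.swappiness
# Lower it at runtime; the change is lost on reboot:
sudo sysctl -w vm.swappiness=10
# To persist the setting across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness = 10
```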
/proc/sys/vm/vfs_cache_pressure
This variable controls the tendency of the kernel to reclaim the memory used for VFS caches (directory and inode objects), versus pagecache and swap. Increasing this value increases the rate at which VFS caches are reclaimed.
It is difficult to know when this should be changed, other than by
experimentation. The slabtop command (part of the
package procps) shows top
memory objects used by the kernel. The VFS caches are the "dentry"
and the "*_inode_cache" objects. If these are consuming a large
amount of memory in relation to pagecache, it may be worth trying to
increase the pressure; this could also help to reduce swapping. The
default value is 100.
/proc/sys/vm/min_free_kbytes
This controls the amount of memory that is kept free for use by special reserves including “atomic” allocations (those which cannot wait for reclaim). This should not normally be lowered unless the system is being very carefully tuned for memory usage (normally useful for embedded rather than server applications). If “page allocation failure” messages and stack traces are frequently seen in logs, min_free_kbytes could be increased until the errors disappear. There is no need for concern if these messages are very infrequent. The default value depends on the amount of RAM.
/proc/sys/vm/watermark_scale_factor
Broadly speaking, free memory has high, low and min watermarks. When
the low watermark is reached then kswapd wakes to
reclaim memory in the background. It stays awake until free memory
reaches the high watermark. Applications will stall and reclaim
memory directly when the min watermark is reached.
The watermark_scale_factor defines the amount
of memory left in a node/system before kswapd is woken up and how
much memory needs to be free before kswapd goes back to sleep.
The unit is in fractions of 10,000. The default value of 10 means
the distances between watermarks are 0.1% of the available memory
in the node/system. The maximum value is 1000, or 10% of memory.
Workloads that frequently stall in direct reclaim, accounted by
allocstall in /proc/vmstat,
may benefit from altering this parameter. Similarly, if
kswapd is sleeping prematurely, as accounted for by
kswapd_low_wmark_hit_quickly, then it may indicate
that the number of pages kept free to avoid stalls is too low.
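The watermark distance implied by a given factor can be estimated with simple arithmetic; a hypothetical helper, with sizes in kilobytes:

```shell
#!/bin/sh
# Distance between watermarks: TOTAL_KB * FACTOR / 10000.
# Usage: watermark_gap_kb TOTAL_KB FACTOR
watermark_gap_kb() {
    echo $(( $1 * $2 / 10000 ))
}

# Default factor of 10 on a 16 GiB node: about 16 MiB between watermarks.
watermark_gap_kb 16777216 10
# Factor of 1000 (the maximum): 10% of the node, about 1.6 GiB.
watermark_gap_kb 16777216 1000
```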
One important change in writeback behavior since openSUSE Leap 10 is that modification to file-backed mmap() memory is accounted immediately as dirty memory (and subject to writeback). Whereas previously it would only be subject to writeback after it was unmapped, upon an msync() system call, or under heavy memory pressure.
Some applications do not expect mmap modifications to be subject to such writeback behavior, and performance can be reduced. Berkeley DB (and applications using it) is one known example that can cause problems. Increasing writeback ratios and times can improve this type of slowdown.
/proc/sys/vm/dirty_background_ratio
This is the percentage of the total amount of free and reclaimable
memory. When the amount of dirty pagecache exceeds this percentage,
writeback threads start writing back dirty memory. The default value
is 10 (%).
/proc/sys/vm/dirty_background_bytes
This contains the amount of dirty memory at which
the background kernel flusher threads will start writeback.
dirty_background_bytes is the counterpart of
dirty_background_ratio. If one of them is set,
the other one will automatically be read as 0.
/proc/sys/vm/dirty_ratio
Similar percentage value as for
dirty_background_ratio. When this is exceeded,
applications that want to write to the pagecache are blocked and
wait for kernel background flusher threads to reduce the amount of dirty
memory. The default value is 20 (%).
/proc/sys/vm/dirty_bytes
This file controls the same tunable as dirty_ratio
however the amount of dirty memory is in bytes as opposed to a
percentage of reclaimable memory. Since both
dirty_ratio and dirty_bytes
control the same tunable, if one of them is set, the other one will
automatically be read as 0. The minimum value allowed
for dirty_bytes is two pages (in bytes); any value
lower than this limit will be ignored and the old configuration will be
retained.
/proc/sys/vm/dirty_expire_centisecs
Data which has been dirty in-memory for longer than this interval will be written out next time a flusher thread wakes up. Expiration is measured based on the modification time of a file's inode. Therefore, multiple dirtied pages from the same file will all be written when the interval is exceeded.
dirty_background_ratio and
dirty_ratio together determine the pagecache
writeback behavior. If these values are increased, more dirty memory is
kept in the system for a longer time. With more dirty memory allowed in
the system, the chance of improving throughput by avoiding writeback I/O
and by submitting more optimal I/O patterns increases. However, more
dirty memory can harm latency when memory needs to be reclaimed, or at
data-integrity points (“synchronization points”) when it needs to be
written back to disk.
/sys/block/<bdev>/queue/read_ahead_kb
If one or more processes are sequentially reading a file, the kernel
reads some data in advance (ahead) to reduce the amount of
time that processes need to wait for data to be available. The actual
amount of data being read in advance is computed dynamically, based
on how sequential the I/O seems to be. This parameter sets the
maximum amount of data that the kernel reads ahead for a single file.
If you observe that large sequential reads from a file are not fast
enough, you can try increasing this value. Increasing it too far may
result in readahead thrashing where pagecache used for readahead is
reclaimed before it can be used, or slowdowns because of a large
amount of useless I/O. The default value is 512
(KB).
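A sketch for one disk (the device name sda and the value 1024 are illustrative only):

```shell
# Read the current readahead limit in KB (default 512):
cat /sys/block/sda/queue/read_ahead_kb
# Raise it for workloads dominated by large sequential reads;
# watch for readahead thrashing before keeping the change:
echo 1024 | sudo tee /sys/block/sda/queue/read_ahead_kb
```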
Transparent Huge Pages (THP) provide a way to dynamically allocate huge
pages, either on demand by the process or by deferring the allocation
until later via the khugepaged kernel thread. This
method is distinct from the use of hugetlbfs to
manually manage their allocation and use. Workloads with contiguous memory
access patterns can benefit greatly from THP. A 1000-fold decrease in page
faults can be observed when running synthetic workloads with contiguous
memory access patterns.
There are cases when THP may be undesirable. Workloads with sparse memory access patterns can perform poorly with THP due to excessive memory usage. For example, 2 MB of memory may be used at fault time instead of 4 KB for each fault, ultimately leading to premature page reclaim. On releases older than openSUSE Leap 42.2, it was possible for an application to stall for long periods of time trying to allocate a THP, which frequently led to a recommendation of disabling THP. Such recommendations should be re-evaluated for openSUSE Leap 42.3.
The behavior of THP may be configured via the
transparent_hugepage= kernel parameter or via
sysfs. For example, it may be disabled by adding the kernel parameter
transparent_hugepage=never, rebuilding your grub2
configuration, and rebooting. Verify if THP is disabled with:
root # cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
If disabled, the value never is shown
in square brackets like in the example above. A value of
always will always try to use THP at fault
time but defer to khugepaged if the allocation
fails. A value of madvise will only allocate THP
for address spaces explicitly specified by an application.
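The active mode is the bracketed word in the sysfs file and can be extracted with a one-line filter; a small sketch (the thp_mode helper name is hypothetical):

```shell
#!/bin/sh
# Print the word enclosed in square brackets, i.e. the active mode.
thp_mode() {
    sed 's/.*\[\(.*\)\].*/\1/'
}

echo "always madvise [never]" | thp_mode    # prints: never
# On a live system:
#   thp_mode < /sys/kernel/mm/transparent_hugepage/enabled
```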
/sys/kernel/mm/transparent_hugepage/defrag
This parameter controls how much effort an application commits when
allocating a THP. A value of always is the default
for openSUSE 42.1 and earlier releases
that supported THP. If a THP is not available, the application will
try to defragment memory. It potentially incurs large stalls in an
application if the memory is fragmented and a THP is not available.
A value of madvise means that THP allocation
requests will only defragment if the application explicitly requests
it. This is the default for openSUSE 42.2 and later
releases.
defer is only available on openSUSE
42.2 and later releases. If a THP is not available, the
application will fall back to using small
pages. It will wake the kswapd and
kcompactd kernel threads to defragment memory in
the background and a THP will be allocated later by
khugepaged.
The final option never will use small pages if
a THP is unavailable but no other action will take place.
khugepaged will be automatically started when
transparent_hugepage is set to
always or madvise, and it will be
automatically shut down if it is set to never. Normally
this runs at low frequency but the behavior can be tuned.
/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
A value of 0 will disable khugepaged even though
THP may still be used at fault time. This may be important for
latency-sensitive applications that benefit from THP but cannot
tolerate a stall if khugepaged tries to update an
application's memory usage.
/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
This parameter controls how many pages are scanned by
khugepaged in a single pass. A scan identifies
small pages that can be reallocated as THP. Increasing this value
will allocate THP in the background faster at the cost of CPU
usage.
/sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
khugepaged sleeps for a short interval specified
by this parameter after each pass to limit how much CPU usage is
used. Reducing this value will allocate THP in the background faster
at the cost of CPU usage. A value of 0 will force continual scanning.
/sys/kernel/mm/transparent_hugepage/khugepaged/alloc_sleep_millisecs
This parameter controls how long khugepaged will
sleep in the event it fails to allocate a THP in the background waiting
for kswapd and kcompactd to
take action.
The remaining parameters for khugepaged are rarely
useful for performance tuning but are fully documented in
/usr/src/linux/Documentation/vm/transhuge.txt.
For the complete list of the VM tunable parameters, see
/usr/src/linux/Documentation/sysctl/vm.txt
(available after having installed the
kernel-source package).
Some simple tools that can help monitor VM behavior:
vmstat: This tool gives a good overview of what the VM is doing. See
Section 2.1.1, “vmstat” for details.
/proc/meminfo: This file gives a detailed
breakdown of where memory is being used. See
Section 2.4.2, “Detailed Memory Usage: /proc/meminfo” for details.
slabtop: This tool provides detailed information
about kernel slab memory usage. buffer_head, dentry, inode_cache,
ext3_inode_cache, etc. are the major caches. This command is available
with the package procps.
/proc/vmstat: This file gives a detailed
breakdown of internal VM behavior. The information contained within
is implementation specific and may not always be available. Some
information is duplicated in /proc/meminfo
and other information can be presented in a friendlier fashion by utilities. For
maximum utility, this file needs to be monitored over time to observe
rates of change. The most important pieces of information that are
hard to derive from other sources are as follows:
pgscan_kswapd_*, pgsteal_kswapd_*
These report respectively the number of pages scanned and reclaimed
by kswapd since the system started. The ratio
between these values can be interpreted as the reclaim efficiency
with a low efficiency implying that the system is struggling to
reclaim memory and may be thrashing. Light activity here is
generally not something to be concerned with.
pgscan_direct_*, pgsteal_direct_*
These report respectively the number of pages scanned and
reclaimed by an application directly. This is correlated with
increases in the allocstall counter. This is
more serious than kswapd activity as these
events indicate that processes are stalling. Heavy activity
here combined with kswapd and high rates of
pgpgin, pgpgout and/or high
rates of pswpin or pswpout
are signs that a system is thrashing heavily.
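The reclaim efficiency mentioned above can be computed directly from the counters; a sketch with illustrative sample values (the helper name is hypothetical):

```shell
#!/bin/sh
# Pages reclaimed per 100 pages scanned.  Low values suggest the
# system struggles to reclaim memory and may be thrashing.
# Usage: reclaim_efficiency SCANNED STOLEN
reclaim_efficiency() {
    echo $(( $2 * 100 / $1 ))
}

# Illustrative counters (read the real ones from /proc/vmstat):
reclaim_efficiency 400000 380000    # healthy: prints 95
reclaim_efficiency 400000 40000     # struggling: prints 10
```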
More detailed information can be obtained using tracepoints.
thp_fault_alloc, thp_fault_fallback
These counters correspond to how many THPs were allocated directly by an application and how many times a THP was not available and small pages were used. Generally, a high fallback rate is harmless unless the application is very sensitive to TLB pressure.
thp_collapse_alloc, thp_collapse_alloc_failed
These counters correspond to how many THPs were allocated by
khugepaged and how many times a THP was not
available and small pages were used. A high fallback rate implies
that the system is fragmented and THPs are not being used even
when the memory usage by applications would allow them. It is
only a problem for applications that are sensitive to TLB pressure.
compact_*_scanned, compact_stall, compact_fail,
compact_success
These counters may increase when THP is enabled and the system is
fragmented. compact_stall is incremented when
an application stalls allocating THP. The remaining counters
account for pages scanned, the number of defragmentation events
that succeeded or failed.
The network subsystem is complex and its tuning highly depends on the system use scenario and on external factors such as software clients or hardware components (switches, routers, or gateways) in your network. The Linux kernel aims more at reliability and low latency than at low overhead and high throughput. Changing some settings can mean less security but better performance.
Networking is largely based on the TCP/IP protocol and a socket interface for communication; for more information about TCP/IP, see Chapter 13, Basic Networking. The Linux kernel handles data it receives or sends via the socket interface in socket buffers. These kernel socket buffers are tunable.
Since kernel version 2.6.17, full autotuning with a 4 MB maximum buffer size exists. This means that manual tuning usually will not improve networking performance considerably. It is often best not to touch the following variables, or at least to check the outcome of tuning efforts carefully.
If you update from an older kernel, it is recommended to remove manual TCP tunings in favor of the autotuning feature.
The special files in the /proc file system can
modify the size and behavior of kernel socket buffers; for general
information about the /proc file system, see
Section 2.6, “The /proc File System”. Find networking related files in:
/proc/sys/net/core /proc/sys/net/ipv4 /proc/sys/net/ipv6
General net variables are explained in the
kernel documentation
(linux/Documentation/sysctl/net.txt). Special
ipv4 variables are explained in
linux/Documentation/networking/ip-sysctl.txt and
linux/Documentation/networking/ipvs-sysctl.txt.
In the /proc file system, for example, it is
possible to either set the Maximum Socket Receive Buffer and Maximum
Socket Send Buffer for all protocols, or to set both these options for the TCP
protocol only (in ipv4), thus overriding the
setting for all protocols (in core).
/proc/sys/net/ipv4/tcp_moderate_rcvbuf
If /proc/sys/net/ipv4/tcp_moderate_rcvbuf is set
to 1, autotuning is active and buffer size is
adjusted dynamically.
/proc/sys/net/ipv4/tcp_rmem
These three values set the minimum, initial, and maximum size of the TCP receive buffer per connection. They define the actual memory usage, not only the TCP window size.
/proc/sys/net/ipv4/tcp_wmem
The same as tcp_rmem, but for the TCP send buffer
per connection.
/proc/sys/net/core/rmem_max
Set to limit the maximum receive buffer size that applications can request.
/proc/sys/net/core/wmem_max
Set to limit the maximum send buffer size that applications can request.
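As a sketch of reading the buffer triple (the sample numbers below are common defaults, used here purely for illustration; the live values come from sysctl -n net.ipv4.tcp_rmem, and the helper name is hypothetical):

```shell
#!/bin/sh
# Label the three tcp_rmem/tcp_wmem fields: minimum, initial and
# maximum buffer size per connection, in bytes.
label_triple() {
    echo "min=$1 initial=$2 max=$3"
}

# Sample values for illustration:
label_triple 4096 87380 6291456
```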
Via /proc it is possible to disable TCP features
that you do not need (all TCP features are switched on by default). For
example, check the following files:
/proc/sys/net/ipv4/tcp_timestamps
TCP time stamps are defined in RFC1323.
/proc/sys/net/ipv4/tcp_window_scaling
TCP window scaling is also defined in RFC1323.
/proc/sys/net/ipv4/tcp_sack
Selective acknowledgments (SACK).
Use sysctl to read or write variables of the
/proc file system. sysctl is
preferable to cat (for reading) and
echo (for writing), because it also reads settings
from /etc/sysctl.conf and, thus, those settings
survive reboots reliably. With sysctl you can read all
variables and their values easily; as root use the following
command to list TCP related settings:
tux > sudo sysctl -a | grep tcp
Tuning network variables can affect other system resources such as CPU or memory use.
Before starting with network tuning, it is important to isolate network bottlenecks and network traffic patterns. There are some tools that can help you with detecting those bottlenecks.
The following tools can help you analyze your network traffic:
netstat, tcpdump, and
wireshark. Wireshark is a network traffic analyzer.
The Linux firewall and masquerading features are provided by the Netfilter kernel modules. This is a highly configurable rule-based framework. If a rule matches a packet, Netfilter accepts or denies it or takes a special action (“target”) as defined by rules such as address translation.
There are quite a lot of properties Netfilter can take into account. Thus, the more rules are defined, the longer packet processing may take. Advanced connection tracking can also be rather expensive and thus slow down overall networking.
When the kernel queue becomes full, all new packets are dropped, causing existing connections to fail. The 'fail-open' feature allows a user to temporarily disable the packet inspection and maintain the connectivity under heavy network traffic. For reference, see https://home.regit.org/netfilter-en/using-nfqueue-and-libnetfilter_queue/.
For more information, see the home page of the Netfilter and iptables project, http://www.netfilter.org
Modern network interface devices can move so many packets that the host can become the limiting factor for achieving maximum performance. To keep up, the system must be able to distribute the work across multiple CPU cores.
Some modern network interfaces can help distribute the work to multiple CPU cores through the implementation of multiple transmission and multiple receive queues in hardware. However, others are only equipped with a single queue and the driver must deal with all incoming packets in a single, serialized stream. To work around this issue, the operating system must "parallelize" the stream to distribute the work across multiple CPUs. On openSUSE Leap this is done via Receive Packet Steering (RPS). RPS can also be used in virtual environments.
RPS creates a unique hash for each data stream using IP addresses and port numbers. The use of this hash ensures that packets for the same data stream are sent to the same CPU, which helps to increase performance.
RPS is configured per network device receive queue and interface. The configuration file names match the following scheme:
/sys/class/net/<device>/queues/<rx-queue>/rps_cpus
<device> stands for the network
device, such as eth0, eth1.
<rx-queue> stands for the receive queue,
such as rx-0, rx-1.
If the network interface hardware only supports a single receive queue,
only rx-0 will exist. If it supports multiple receive
queues, there will be an rx-N directory for
each receive queue.
These configuration files contain a comma-delimited list of CPU bitmaps.
By default, all bits are set to 0. With this setting
RPS is disabled and therefore the CPU that handles the interrupt will
also process the packet queue.
To enable RPS and enable specific CPUs to process packets for the receive
queue of the interface, set the value of their positions in the bitmap to
1. For example, to enable CPUs 0-3 to process packets
for the first receive queue for eth0, set the bit positions
0-3 to 1 in binary: 00001111. This representation then
needs to be converted to hex—which results in F in
this case. Set this hex value with the following command:
tux > sudo echo "f" > /sys/class/net/eth0/queues/rx-0/rps_cpus
If you wanted to enable CPUs 8-15:
1111 1111 0000 0000   (binary)
  15   15    0    0   (decimal)
   F    F    0    0   (hex)
The command to set the hex value of ff00 would be:
tux > sudo echo "ff00" > /sys/class/net/eth0/queues/rx-0/rps_cpus
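The binary-to-hex conversion can also be scripted; a hypothetical helper that builds the mask for a contiguous CPU range:

```shell
#!/bin/sh
# Build the rps_cpus hex bitmask for an inclusive range of CPUs.
# Usage: cpu_mask FIRST LAST
cpu_mask() {
    first=$1; last=$2; mask=0; i=$first
    while [ "$i" -le "$last" ]; do
        mask=$(( mask | (1 << i) ))
        i=$(( i + 1 ))
    done
    printf '%x\n' "$mask"
}

cpu_mask 0 3     # CPUs 0-3  -> f
cpu_mask 8 15    # CPUs 8-15 -> ff00
```

The result can then be written to the rps_cpus file for the receive queue in question.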
On NUMA machines, best performance can be achieved by configuring RPS to use the CPUs on the same NUMA node as the interrupt for the interface's receive queue.
On non-NUMA machines, all CPUs can be used. If the interrupt rate is very
high, excluding the CPU handling the network interface can boost
performance. The CPU being used for the network interface can be
determined from /proc/interrupts. For example:
tux > sudo cat /proc/interrupts
           CPU0       CPU1       CPU2       CPU3
 ...
 51:  113915241          0          0          0   Phys-fasteoi   eth0
 ...
In this case, CPU 0 is the only CPU processing
interrupts for eth0, since only
CPU0 contains a non-zero value.
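Finding the CPU column with a non-zero count can be automated; a sketch (irq_cpus is a hypothetical helper, hard-coded for four CPU columns):

```shell
#!/bin/sh
# Print which of the CPU0-CPU3 columns has a non-zero count for lines
# matching PATTERN.  Reads an interrupt table on standard input.
irq_cpus() {
    awk -v pat="$1" '$0 ~ pat {
        for (i = 2; i <= 5; i++)     # fields 2-5 are CPU0-CPU3
            if ($i + 0 > 0) printf "CPU%d\n", i - 2
    }'
}

# Sample line from the text; on a live system:
#   irq_cpus eth0 < /proc/interrupts
irq_cpus eth0 <<'EOF'
 51:  113915241          0          0          0   Phys-fasteoi   eth0
EOF
```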
On x86 and AMD64/Intel 64 platforms, irqbalance can be used
to distribute hardware interrupts across CPUs. See man 1
irqbalance for more details.
Eduardo Ciliendo, Takechika Kunimasa: “Linux Performance and Tuning Guidelines” (2007), esp. sections 1.5, 3.5, and 4.7: http://www.redbooks.ibm.com/redpapers/abstracts/redp4285.html
John Heffner, Matt Mathis: “Tuning TCP for Linux 2.4 and 2.6” (2006): http://www.psc.edu/networking/projects/tcptune/#Linux
openSUSE Leap comes with several tools that help you obtain useful information about your system. You can use the information for various purposes, for example, to debug and find problems in your program, to discover places causing performance drops, or to trace a running process to find out what system resources it uses.
While a running process is being monitored for system or library calls, the performance of the process is heavily reduced. You are advised to use tracing tools only for the time you need to collect the data.
The strace command traces system calls of a process
and signals received by the process. strace can either
run a new command and trace its system calls, or you can attach
strace to an already running command. Each line of the
command's output contains the system call name, followed by its arguments
in parentheses and its return value.
To run a new command and start tracing its system calls, enter the
command to be monitored as you normally do, and add
strace at the beginning of the command line:
tux > strace ls
execve("/bin/ls", ["ls"], [/* 52 vars */]) = 0
brk(0) = 0x618000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f9848667000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f9848666000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT \
(No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=200411, ...}) = 0
mmap(NULL, 200411, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f9848635000
close(3) = 0
open("/lib64/librt.so.1", O_RDONLY) = 3
[...]
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7fd780f79000
write(1, "Desktop\nDocuments\nbin\ninst-sys\n", 31Desktop
Documents
bin
inst-sys
) = 31
close(1) = 0
munmap(0x7fd780f79000, 4096) = 0
close(2) = 0
exit_group(0) = ?
To attach strace to an already running process, you
need to specify the -p option with the process ID
(PID) of the process that you want to monitor:
tux > strace -p `pidof cron`
Process 1261 attached
restart_syscall(<... resuming interrupted call ...>) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2309, ...}) = 0
select(5, [4], NULL, NULL, {0, 0}) = 0 (Timeout)
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = 0
sendto(5, "\2\0\0\0\0\0\0\0\5\0\0\0root\0", 17, MSG_NOSIGNAL, NULL, 0) = 17
poll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, 5000) = 1 ([{fd=5, revents=POLLIN|POLLHUP}])
read(5, "\2\0\0\0\1\0\0\0\5\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\6\0\0\0"..., 36) = 36
read(5, "root\0x\0root\0/root\0/bin/bash\0", 28) = 28
close(5) = 0
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigaction(SIGCHLD, NULL, {0x7f772b9ea890, [], SA_RESTORER|SA_RESTART, 0x7f772adf7880}, 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
nanosleep({60, 0}, 0x7fff87d8c580) = 0
stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=2309, ...}) = 0
select(5, [4], NULL, NULL, {0, 0}) = 0 (Timeout)
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/nscd/socket"}, 110) = 0
sendto(5, "\2\0\0\0\0\0\0\0\5\0\0\0root\0", 17, MSG_NOSIGNAL, NULL, 0) = 17
poll([{fd=5, events=POLLIN|POLLERR|POLLHUP}], 1, 5000) = 1 ([{fd=5, revents=POLLIN|POLLHUP}])
read(5, "\2\0\0\0\1\0\0\0\5\0\0\0\2\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\6\0\0\0"..., 36) = 36
read(5, "root\0x\0root\0/root\0/bin/bash\0", 28) = 28
close(5)
[...]
The -e option understands several sub-options and
arguments. For example, to trace all attempts to open or write to a
particular file, use the following:
tux > strace -e trace=open,write ls ~
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/lib64/librt.so.1", O_RDONLY) = 3
open("/lib64/libselinux.so.1", O_RDONLY) = 3
open("/lib64/libacl.so.1", O_RDONLY) = 3
open("/lib64/libc.so.6", O_RDONLY) = 3
open("/lib64/libpthread.so.0", O_RDONLY) = 3
[...]
open("/usr/lib/locale/cs_CZ.utf8/LC_CTYPE", O_RDONLY) = 3
open(".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
write(1, "addressbook.db.bak\nbin\ncxoffice\n"..., 311) = 311
To trace only network related system calls, use -e
trace=network:
tux > strace -e trace=network -p 26520
Process 26520 attached - interrupt to quit
socket(PF_NETLINK, SOCK_RAW, 0) = 50
bind(50, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
getsockname(50, {sa_family=AF_NETLINK, pid=26520, groups=00000000}, \
[12]) = 0
sendto(50, "\24\0\0\0\26\0\1\3~p\315K\0\0\0\0\0\0\0\0", 20, 0,
{sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 20
[...]
The -c option calculates the time the kernel spent on each
system call:
tux > strace -c find /etc -name xorg.conf
/etc/X11/xorg.conf
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
32.38 0.000181 181 1 execve
22.00 0.000123 0 576 getdents64
19.50 0.000109 0 917 31 open
19.14 0.000107 0 888 close
4.11 0.000023 2 10 mprotect
0.00 0.000000 0 1 write
[...]
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 3 1 futex
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 4 fadvise64
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 0.000559 3633 33 total
To trace all child processes of a process, use -f:
tux > strace -f rcapache2 status
execve("/usr/sbin/rcapache2", ["rcapache2", "status"], [/* 81 vars */]) = 0
brk(0) = 0x69e000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f3bb553b000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) \
= 0x7f3bb553a000
[...]
[pid 4823] rt_sigprocmask(SIG_SETMASK, [], <unfinished ...>
[pid 4822] close(4 <unfinished ...>
[pid 4823] <... rt_sigprocmask resumed> NULL, 8) = 0
[pid 4822] <... close resumed> ) = 0
[...]
[pid 4825] mprotect(0x7fc42cbbd000, 16384, PROT_READ) = 0
[pid 4825] mprotect(0x60a000, 4096, PROT_READ) = 0
[pid 4825] mprotect(0x7fc42cde4000, 4096, PROT_READ) = 0
[pid 4825] munmap(0x7fc42cda2000, 261953) = 0
[...]
[pid 4830] munmap(0x7fb1fff10000, 261953) = 0
[pid 4830] rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
[pid 4830] open("/dev/tty", O_RDWR|O_NONBLOCK) = 3
[pid 4830] close(3)
[...]
read(255, "\n\n# Inform the caller not only v"..., 8192) = 73
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
exit_group(0)
If you need to analyze the output of strace and the
messages are too long to inspect directly in the console
window, use -o to write them to a file. In that case, unnecessary
messages, such as information about attaching and detaching processes,
are suppressed. You can also suppress these messages (normally printed
on standard error) with -q. To add a time stamp at the beginning of each
line with a system call, use -t:
tux > strace -t -o strace_sleep.txt sleep 1; more strace_sleep.txt
08:44:06 execve("/bin/sleep", ["sleep", "1"], [/* 81 vars */]) = 0
08:44:06 brk(0) = 0x606000
08:44:06 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, \
-1, 0) = 0x7f8e78cc5000
[...]
08:44:06 close(3) = 0
08:44:06 nanosleep({1, 0}, NULL) = 0
08:44:07 close(1) = 0
08:44:07 close(2) = 0
08:44:07 exit_group(0) = ?
The behavior and output format of strace can be largely controlled. For more information, see the relevant manual page (man 1 strace).
ltrace traces dynamic library calls of a process. It
is used in a similar way to strace, and most of its
parameters have a very similar or identical meaning. By default,
ltrace reads its configuration from the
/etc/ltrace.conf or
~/.ltrace.conf files. You can,
however, specify an alternative one with the -F
CONFIG_FILE option.
In addition to library calls, ltrace with the
-S option can trace system calls as well:
tux > ltrace -S -o ltrace_find.txt find /etc -name \
xorg.conf; more ltrace_find.txt
SYS_brk(NULL) = 0x00628000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327ea1000
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327ea0000
[...]
fnmatch("xorg.conf", "xorg.conf", 0) = 0
free(0x0062db80) = <void>
__errno_location() = 0x7f1327e5d698
__ctype_get_mb_cur_max(0x7fff25227af0, 8192, 0x62e020, -1, 0) = 6
__ctype_get_mb_cur_max(0x7fff25227af0, 18, 0x7f1327e5d6f0, 0x7fff25227af0,
0x62e031) = 6
__fprintf_chk(0x7f1327821780, 1, 0x420cf7, 0x7fff25227af0, 0x62e031
<unfinished ...>
SYS_fstat(1, 0x7fff25227230) = 0
SYS_mmap(0, 4096, 3, 34, 0xffffffff) = 0x7f1327e72000
SYS_write(1, "/etc/X11/xorg.conf\n", 19) = 19
[...]
You can change the type of traced events with the -e
option. The following example prints library calls related to
fnmatch and strlen
functions:
tux > ltrace -e fnmatch,strlen find /etc -name xorg.conf
[...]
fnmatch("xorg.conf", "xorg.conf", 0) = 0
strlen("Xresources") = 10
strlen("Xresources") = 10
strlen("Xresources") = 10
fnmatch("xorg.conf", "Xresources", 0) = 1
strlen("xorg.conf.install") = 17
[...]
To display only the symbols included in a specific library, use
-l /path/to/library:
tux > ltrace -l /lib64/librt.so.1 sleep 1
clock_gettime(1, 0x7fff4b5c34d0, 0, 0, 0) = 0
clock_gettime(1, 0x7fff4b5c34c0, 0xffffffffff600180, -1, 0) = 0
+++ exited (status 0) +++
You can make the output more readable by indenting each nested call by
a specified number of spaces with the -n
NUM_OF_SPACES option.
Valgrind is a set of tools to debug and profile programs so that they run faster and with fewer errors. Valgrind can detect problems related to memory management and threading, and can also serve as a framework for building new debugging tools. This tool is known to incur high overhead, which can cause, for example, longer runtimes, or change the normal program behavior under timing-dependent concurrent workloads.
openSUSE Leap supports Valgrind on the following architectures:
AMD64/Intel 64
POWER
z Systems
The main advantage of Valgrind is that it works with existing compiled executables. You do not need to recompile or modify your programs to use it. Run Valgrind like this:
valgrind VALGRIND_OPTIONS your-prog YOUR-PROGRAM-OPTIONS
Valgrind consists of several tools, and each provides specific
functionality. Information in this section is general and valid
regardless of the used tool. The most important configuration option is
--tool. This option tells Valgrind which tool to run.
If you omit this option, memcheck is selected
by default. For example, to run find ~
-name .bashrc with Valgrind's
memcheck tool, enter the following on the
command line:
valgrind --tool=memcheck find ~ -name .bashrc
A list of standard Valgrind tools with a brief description follows:
memcheck
Detects memory errors. It helps you tune your programs to behave correctly.
cachegrind
Profiles cache prediction. It helps you tune your programs to run faster.
callgrind
Works in a similar way to cachegrind but
additionally records the call graph of the profiled program.
exp-drd
Detects thread errors. It helps you tune your multi-threaded programs to behave correctly.
helgrind
Another thread error detector. Similar to
exp-drd but uses different techniques for
problem analysis.
massif
A heap profiler. The heap is an area of memory used for dynamic memory allocation. This tool helps you tune your program to use less memory.
lackey
An example tool showing instrumentation basics.
Valgrind can read options at start-up. There are three places that Valgrind checks:
The file .valgrindrc in the home directory of the
user who runs Valgrind.
The environment variable $VALGRIND_OPTS
The file .valgrindrc in the current directory
where Valgrind is run from.
These resources are parsed in exactly this order, with later
options taking precedence over earlier ones. Options specific
to a particular Valgrind tool must be prefixed with the tool name and a
colon. For example, if you want cachegrind to
always write profile data to the
/tmp/cachegrind_PID.log,
add the following line to the .valgrindrc file in
your home directory:
--cachegrind:cachegrind-out-file=/tmp/cachegrind_%p.log
Valgrind takes control of your executable before it starts. It reads debugging information from the executable and related shared libraries. The executable's code is redirected to the selected Valgrind tool, and the tool adds its own code to handle its debugging. Then the code is handed back to the Valgrind core and the execution continues.
For example, memcheck adds its code, which
checks every memory access. As a consequence, the program runs much
slower than in the native execution environment.
Valgrind simulates every instruction of your program. Therefore, it not
only checks the code of your program, but also all related libraries
(including the C library), libraries used for graphical environment, and
so on. If you try to detect errors with Valgrind, it also detects errors
in associated libraries (like C, X11, or Gtk libraries). Because you
probably do not need those errors, Valgrind can selectively suppress
such error messages using suppression files. The
--gen-suppressions=yes option tells Valgrind to report these
suppressions, which you can copy to a file.
You should supply a real executable (machine code) as a Valgrind
argument. If your application is run, for example, from a shell or Perl
script, you will mistakenly get error reports related to
/bin/sh (or /usr/bin/perl). In
such cases, you can use --trace-children=yes to work
around this issue. However, using the executable itself avoids any
confusion over this issue.
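The difference can be sketched as follows (the script name is a hypothetical example):

```shell
# Tracing a wrapper script directly reports errors for the interpreter
# (/bin/sh), not for the application the script starts:
valgrind --tool=memcheck ./start-myapp.sh

# With --trace-children=yes, Valgrind also follows the child processes
# the script creates, so the real application is checked as well:
valgrind --tool=memcheck --trace-children=yes ./start-myapp.sh
```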
During its runtime, Valgrind reports messages with detailed errors and important events. The following example explains the messages:
tux > valgrind --tool=memcheck find ~ -name .bashrc
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558== at 0x400AE79: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558== by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== Conditional jump or move depends on uninitialised value(s)
==6558== at 0x400AE82: _dl_relocate_object (in /lib64/ld-2.11.1.so)
==6558== by 0x4003868: dl_main (in /lib64/ld-2.11.1.so)
[...]
==6558== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
==6558== malloc/free: in use at exit: 2,228 bytes in 8 blocks.
==6558== malloc/free: 235 allocs, 227 frees, 489,675 bytes allocated.
==6558== For counts of detected errors, rerun with: -v
==6558== searching for pointers to 8 not-freed blocks.
==6558== checked 122,584 bytes.
==6558==
==6558== LEAK SUMMARY:
==6558== definitely lost: 0 bytes in 0 blocks.
==6558== possibly lost: 0 bytes in 0 blocks.
==6558== still reachable: 2,228 bytes in 8 blocks.
==6558== suppressed: 0 bytes in 0 blocks.
==6558== Rerun with --leak-check=full to see details of leaked memory.
The ==6558== introduces Valgrind's messages and
contains the process ID number (PID). You can easily distinguish
Valgrind's messages from the output of the program itself, and decide
which messages belong to a particular process.
To make Valgrind's messages more detailed, use -v or
even -v -v.
You can make Valgrind send its messages to three different places:
By default, Valgrind sends its messages to file descriptor 2,
which is the standard error output. You can tell Valgrind to send its
messages to any other file descriptor with the
--log-fd=FILE_DESCRIPTOR_NUMBER
option.
The second and probably more useful way is to send Valgrind's messages
to a file with
--log-file=FILENAME. This
option accepts several variables, for example, %p
gets replaced with the PID of the currently profiled process. This way
you can send messages to different files based on their PID.
%q{env_var} is replaced with the value of the
related env_var environment variable.
The following example checks for possible memory errors during the Apache Web server restart, following child processes and writing Valgrind's detailed messages to separate files distinguished by process PID:
tux > valgrind -v --tool=memcheck --trace-children=yes \
--log-file=valgrind_pid_%p.log rcapache2 restart
On the test system, this process created 52 log files and took 75
seconds instead of the usual 7 seconds needed to run sudo
systemctl restart apache2 without Valgrind, which is
approximately ten times longer.
tux > ls -1 valgrind_pid_*log
valgrind_pid_11780.log
valgrind_pid_11782.log
valgrind_pid_11783.log
[...]
valgrind_pid_11860.log
valgrind_pid_11862.log
valgrind_pid_11863.log
You may also prefer to send Valgrind's messages over the network.
You need to specify the aa.bb.cc.dd IP address and
port_num port number of the network socket with the
--log-socket=AA.BB.CC.DD:PORT_NUM
option. If you omit the port number, 1500 will be used.
It is useless to send Valgrind's messages to a network socket if no
application is capable of receiving them on the remote machine. That
is why valgrind-listener, a simple listener, is
shipped together with Valgrind. It accepts connections on the
specified port and copies everything it receives to the standard
output.
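Putting the two together might look like this (the IP address and port are example values):

```shell
# On the remote machine (here assumed to be 192.168.0.10), start the
# listener first; it copies everything it receives to standard output:
valgrind-listener 1500

# On the machine being analyzed, direct Valgrind's messages to that socket:
valgrind --tool=memcheck --log-socket=192.168.0.10:1500 find ~ -name .bashrc
```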
Valgrind remembers all error messages, and if it detects a new error, the error is compared against old error messages. This way Valgrind checks for duplicate error messages. In case of a duplicate error, it is recorded but no message is shown. This mechanism prevents you from being overwhelmed by millions of duplicate errors.
The -v option will add a summary of all reports (sorted
by their total count) to the end of the Valgrind's execution output.
Moreover, Valgrind stops collecting errors if it detects either 1000
different errors or 10,000,000 errors in total. To lift
this limit and see all error messages, use
--error-limit=no.
Some errors can cause others. Therefore, fix errors in the same order as they appear and re-check the program after each fix.
For a complete list of options related to the described tracing tools,
see the corresponding man page (man 1 strace,
man 1 ltrace, and man 1
valgrind).
Advanced usage of Valgrind is beyond the scope of this document. Valgrind is very well documented; see the Valgrind User Manual. Those pages are indispensable if you need more advanced information on Valgrind or on the usage and purpose of its standard tools.
Kexec is a tool to boot to another kernel from the currently running one. You can perform faster system reboots without any hardware initialization. You can also prepare the system to boot to another kernel if the system crashes.
With Kexec, you can replace the running kernel with another one without a hard reboot. The tool is useful for several reasons:
Faster system rebooting
If you need to reboot the system frequently, Kexec can save you significant time.
Avoiding unreliable firmware and hardware
Computer hardware is complex and serious problems may occur during the system start-up. You cannot always replace unreliable hardware immediately. Kexec boots the kernel to a controlled environment with the hardware already initialized. The risk of unsuccessful system start is then minimized.
Saving the dump of a crashed kernel
Kexec preserves the contents of the physical memory. After the production kernel fails, the capture kernel (an additional kernel running in a reserved memory range) saves the state of the failed kernel. The saved image can help you with the subsequent analysis.
Booting without GRUB 2 configuration
When the system boots a kernel with Kexec, it skips the boot loader stage. The normal booting procedure can fail because of an error in the boot loader configuration. With Kexec, you do not depend on a working boot loader configuration.
To use Kexec on openSUSE® Leap to speed up reboots or avoid potential
hardware problems, make sure that the package
kexec-tools is installed.
It contains a script called
kexec-bootloader, which reads the boot loader
configuration and runs Kexec using the same kernel options as the
normal boot loader.
To set up an environment that helps you obtain debug information
in case of a kernel crash, make sure that the package
makedumpfile is installed.
The preferred method of using Kdump in openSUSE Leap is through
the YaST Kdump module.
To use the YaST module, make sure that the package
yast2-kdump is installed.
The most important component of Kexec is the
/sbin/kexec command. You can load a kernel with
Kexec in two different ways:
Load the kernel to the address space of a production kernel for a regular reboot:
root # kexec -l KERNEL_IMAGE
You can later boot to this kernel with
kexec -e.
Load the kernel to a reserved area of memory:
root # kexec -p KERNEL_IMAGE
This kernel will be booted automatically when the system crashes.
If you want to boot another kernel and preserve the data of the production kernel when the system crashes, you need to reserve a dedicated area of the system memory. The production kernel never loads to this area because it must be always available. It is used for the capture kernel so that the memory pages of the production kernel can be preserved.
To reserve the area, append the option crashkernel
to the boot command line of the production kernel.
To determine the necessary values for crashkernel, follow
the instructions in Section 17.4, “Calculating crashkernel Allocation Size”.
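On a system booted with GRUB 2, one common way to make the option persistent is via /etc/default/grub (a sketch; substitute the sizes you determined in that section):

```shell
# Append the reservation to the default kernel command line in
# /etc/default/grub, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="... crashkernel=532M,high crashkernel=72M,low"

# Then regenerate the boot loader configuration and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
```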
Note that this is not a parameter of the capture kernel. The capture kernel does not use Kexec.
The capture kernel is loaded to the reserved area and waits for the kernel to crash. Then, Kdump tries to invoke the capture kernel because the production kernel is no longer reliable at this stage. This means that even Kdump can fail.
To load the capture kernel, you need to include the kernel boot
parameters. Usually, the initial RAM file system is used for booting. You
can specify it with
--initrd=FILENAME.
With
--append=CMDLINE,
you append options to the command line of the kernel to boot.
It is helpful to include the command line of
the production kernel if these options are necessary for the kernel to
boot. You can simply copy the command line with
--append="$(cat /proc/cmdline)"
or add more options with
--append="$(cat /proc/cmdline) more_options".
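A complete load command following this pattern could look like the following (the kernel and initrd file names depend on the installed kernel version):

```shell
# Load a capture kernel into the reserved memory area, reusing the
# production kernel's command line:
kexec -p /boot/vmlinuz --initrd=/boot/initrd \
      --append="$(cat /proc/cmdline)"
```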
You can always unload the previously loaded kernel. To unload a kernel
that was loaded with the -l option, use the
kexec -u command. To unload a crash
kernel loaded with the -p option, use
kexec -p -u command.
Calculating crashkernel Allocation Size
To use Kexec with a capture kernel and to use Kdump in any way, RAM needs to be allocated for the capture kernel. The allocation size depends on the expected hardware configuration of the computer, therefore you need to specify it.
The allocation size also depends on the hardware architecture of your computer. Make sure to follow the procedure intended for your system architecture.
To find out the base value for the computer, run the following in a terminal:
root # kdumptool calibrate
This command returns a list of values. All values are given in megabytes.
Write down the values of Low and
High.
Low and High Values
On AMD64/Intel 64 computers, the High value stands
for the memory reservation for all available memory.
The Low value stands for the memory reservation
in the DMA32 zone, that is, all the memory up to the 4 GB mark.
If the computer has less than 4 GB of RAM, the
High memory reservation is allocated and the
Low memory reservation is ignored.
If the computer has more than 4 GB of RAM, the Low
memory reservation is allocated additionally.
Adapt the High value from the previous step for
the number of LUN kernel paths (paths to storage devices) attached to the
computer.
A sensible value in megabytes can be calculated using this formula:
SIZE_HIGH = RECOMMENDATION + (LUNs / 2)
The following parameters are used in this formula:
SIZE_HIGH.
The resulting value for High.
RECOMMENDATION.
The value recommended by kdumptool calibrate
for High.
LUNs. The maximum number of LUN kernel paths that you expect to ever create on the computer. Exclude multipath devices from this number, as these are ignored.
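As a worked example of the formula (the calibrate recommendation and LUN count below are assumed figures, not values from a real system):

```shell
# Assumed: kdumptool calibrate recommended High=512 MB, and you expect
# up to 40 LUN kernel paths on this computer.
RECOMMENDATION=512
LUNS=40
SIZE_HIGH=$((RECOMMENDATION + LUNS / 2))     # 512 + 20 = 532
echo "crashkernel=${SIZE_HIGH}M,high"        # prints: crashkernel=532M,high
```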
For machines with multiple terabytes of RAM, such as many servers running SAP HANA, you need to additionally adjust the amount of both Kdump High and Low memory.
Experience suggests that in such cases, you might be successful using the following formulas:
SIZE_HIGH = (RECOMMENDATION * RAM_IN_TB) + (LUNs / 2)
SIZE_LOW = (RECOMMENDATION * RAM_IN_TB) + CUSTOM_DRIVER-RESERVATION_ADJUSTMENT
If the drivers for your device make many reservations in the DMA32 zone,
the Low value also needs to be adjusted.
However, there is no simple formula to calculate these.
Finding the right size can therefore be a process of trial and error.
For the beginning, use the Low value recommended by
kdumptool calibrate.
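A worked example of the multi-terabyte High formula (all figures are assumptions for illustration):

```shell
# Assumed: recommendation 512 MB, 4 TB of RAM, up to 40 LUN kernel paths.
RECOMMENDATION=512
RAM_IN_TB=4
LUNS=40
SIZE_HIGH=$((RECOMMENDATION * RAM_IN_TB + LUNS / 2))   # 2048 + 20 = 2068
echo "SIZE_HIGH=${SIZE_HIGH}M"                          # prints: SIZE_HIGH=2068M
```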
The values now need to be set in the correct location.
Append the following kernel option to your boot loader configuration:
crashkernel=SIZE_HIGH,high crashkernel=SIZE_LOW,low
Replace the placeholders SIZE_HIGH and
SIZE_LOW with the appropriate value from the
previous steps and append the letter M
(for megabytes).
As an example, the following is valid:
crashkernel=36M,high crashkernel=72M,low
Here, 72M is the determined
Low value and 36M is the determined
High value.
Use the following command:
root # yast kdump startup enable alloc_mem=LOW,HIGH
Replace LOW with the determined
Low value. Replace
HIGH with the determined
High value.
To find out the base value for the computer, run the following in a terminal:
root # kdumptool calibrate
This command returns a list of values. All values are given in megabytes.
Write down the value of Low.
Adapt the Low value from the previous step for
the number of LUN kernel paths (paths to storage devices) attached to the
computer.
A sensible value in megabytes can be calculated using this formula:
SIZE_LOW = RECOMMENDATION + (LUNs / 2)
The following parameters are used in this formula:
SIZE_LOW.
The resulting value for Low.
RECOMMENDATION.
The value recommended by kdumptool calibrate
for Low.
LUNs. The maximum number of LUN kernel paths that you expect to ever create on the computer. Exclude multipath devices from this number, as these are ignored.
The values now need to be set in the correct location.
Append the following kernel option to your boot loader configuration:
crashkernel=SIZE_LOW
Replace the placeholder SIZE_LOW with the
appropriate value from the previous step and append the letter
M (for megabytes).
As an example, the following is valid:
crashkernel=108M
Here, 108M is the determined
Low value.
Depending on the number of available devices, the calculated amount of
memory specified by the crashkernel kernel parameter may
not be sufficient. Instead of increasing the value, you can alternatively
limit the number of devices visible to the kernel. This lowers the
amount of memory required for the crashkernel setting.
To ignore devices, you can run the cio_ignore tool to
generate an appropriate stanza that ignores all devices except the ones
currently active or in use.
tux > sudo cio_ignore -u -k
cio_ignore=all,!da5d,!f500-f502
When you run cio_ignore -u -k, the blacklist
becomes active and immediately replaces any existing blacklist. Unused
devices are not purged, so they still appear in the channel
subsystem. However, channel devices added later (via CP ATTACH under z/VM or
a dynamic I/O configuration change in LPAR) will be treated as
blacklisted. To prevent this, preserve the original setting by running
sudo cio_ignore -l first and reverting to that
state after running cio_ignore -u -k. As an
alternative, add the generated stanza to the regular kernel boot
parameters.
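The preserve-and-restore sequence described above could be scripted roughly as follows (the file name for the saved state is arbitrary; the device IDs in the generated stanza differ per system):

```shell
# Save the current cio_ignore state before changing it:
sudo cio_ignore -l > /tmp/cio_ignore.before

# Activate a blacklist covering all devices except the active ones,
# and print the matching kernel parameter stanza:
sudo cio_ignore -u -k

# ...after you are done, revert using the state saved in
# /tmp/cio_ignore.before, or add the printed stanza to the regular
# kernel boot parameters instead, as described above.
```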
Now add the cio_ignore kernel parameter with the stanza
from above to KDUMP_COMMANDLINE_APPEND in
/etc/sysconfig/kdump, for example:
KDUMP_COMMANDLINE_APPEND="cio_ignore=all,!da5d,!f500-f502"
Activate the setting by restarting kdump:
systemctl restart kdump.service
To verify if your Kexec environment works properly, follow these steps:
Make sure no users are currently logged in and no important services are running on the system.
Log in as root.
Switch to the rescue target with systemctl isolate
rescue.target
Load the new kernel to the address space of the production kernel with the following command:
root # kexec -l /boot/vmlinuz --append="$(cat /proc/cmdline)" --initrd=/boot/initrd
Unmount all mounted file systems except the root file system with:
root # umount -a
Unmounting all file systems will most likely produce a device
is busy warning message. The root file system cannot be
unmounted if the system is running. Ignore the warning.
Remount the root file system in read-only mode:
root # mount -o remount,ro /
Initiate the reboot of the kernel that you loaded in Step 4 with:
root # kexec -e
It is important to unmount disk volumes that are mounted in
read-write mode. The reboot system call acts
immediately upon being called. Hard disk volumes mounted in read-write
mode are neither synchronized nor unmounted automatically. The new
kernel may find them “dirty”. Read-only disk volumes and virtual file
systems do not need to be unmounted. Refer to
/etc/mtab to determine which file systems you need
to unmount.
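To see which file systems are currently mounted read-write, you can filter /etc/mtab (a minimal sketch; the mount options are in the fourth field):

```shell
# Print the mount point of every file system whose options include "rw":
awk '$4 ~ /(^|,)rw(,|$)/ {print $2}' /etc/mtab
```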
The new kernel previously loaded to the address space of the older kernel overwrites it and takes control immediately. It displays the usual start-up messages. When the new kernel boots, it skips all hardware and firmware checks. Make sure no warning messages appear. All file systems that were unmounted should be clean.
Kexec is often used for frequent reboots, for example, if it takes a long time to run through the hardware detection routines or if the start-up is not reliable.
Note that firmware and the boot loader are not used when the system reboots with Kexec. Any changes you make to the boot loader configuration will be ignored until the computer performs a hard reboot.
You can use Kdump to save kernel dumps. If the kernel crashes, it is useful to copy the memory image of the crashed environment to the file system. You can then debug the dump file to find the cause of the kernel crash. This is called “core dump”.
Kdump works similarly to Kexec (see Chapter 17, Kexec and Kdump). The capture kernel is executed after the running production kernel crashes. The difference is that Kexec replaces the production kernel with the capture kernel. With Kdump, you still have access to the memory space of the crashed production kernel. You can save the memory snapshot of the crashed kernel in the environment of the Kdump kernel.
In environments with limited local storage, you need to set up kernel
dumps over the network. Kdump supports configuring the specified
network interface and bringing it up via
initrd. Both LAN and VLAN interfaces are
supported. Specify the network interface and the mode (DHCP or static)
either with YaST, or using the KDUMP_NETCONFIG
option in the /etc/sysconfig/kdump file.
When configuring Kdump, you can specify a location to which the
dumped images will be saved (default: /var/crash).
This location must be mounted when configuring Kdump, otherwise the
configuration will fail.
Kdump reads its configuration from the
/etc/sysconfig/kdump file. To make sure that
Kdump works on your system, its default configuration is
sufficient. To use Kdump with the default settings, follow these
steps:
Determine the amount of memory needed for Kdump by following the
instructions in Section 17.4, “Calculating crashkernel Allocation Size”.
Make sure to set the kernel parameter crashkernel.
Reboot the computer.
Enable the Kdump service:
root # systemctl enable kdump
You can edit the options in /etc/sysconfig/kdump.
Reading the comments will help you understand the meaning of
individual options.
Execute the init script once with sudo systemctl start
kdump, or reboot the system.
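After rebooting with the parameter in place, you can verify the setup with commands like the following (these are standard Linux interfaces):

```shell
# Confirm the crashkernel parameter reached the running kernel:
grep -o 'crashkernel=[^ ]*' /proc/cmdline

# Check whether a capture kernel is currently loaded (1 means loaded):
cat /sys/kernel/kexec_crash_loaded

# Check the state of the Kdump service:
systemctl status kdump
```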
After configuring Kdump with the default values, check if it works as expected. Make sure that no users are currently logged in and no important services are running on your system. Then follow these steps:
Switch to the rescue target with systemctl isolate
rescue.target
Restart the Kdump service:
root # systemctl start kdump
Unmount all the disk file systems except the root file system with:
root # umount -a
Remount the root file system in read-only mode:
root # mount -o remount,ro /
Invoke a “kernel panic” with the procfs
interface to Magic SysRq keys:
root # echo c > /proc/sysrq-trigger
The KDUMP_KEEP_OLD_DUMPS option controls the number
of preserved kernel dumps (the default is 5). Without compression, the size
of a dump can be as large as the physical RAM. Make
sure you have sufficient space on the /var
partition.
The capture kernel boots and the crashed kernel memory snapshot is saved
to the file system. The save path is given by the
KDUMP_SAVEDIR option and it defaults to
/var/crash. If
KDUMP_IMMEDIATE_REBOOT is set to
yes, the system automatically reboots the production
kernel. Log in and check that the dump has been created under
/var/crash.
In case Kdump is configured to use a static IP configuration from a
network device, you need to add the network configuration to the
KDUMP_COMMANDLINE_APPEND variable in
/etc/sysconfig/kdump.
The following setup has been configured:
eth0 has been configured with the static IP address 192.168.1.1/24
eth1 has been configured with the static IP address 10.50.50.100/20
The Kdump configuration in /etc/sysconfig/kdump
looks like:
KDUMP_CPUS=1
KDUMP_IMMEDIATE_REBOOT=yes
KDUMP_SAVEDIR=ftp://anonymous@10.50.50.140/crashdump/
KDUMP_KEEP_OLD_DUMPS=5
KDUMP_FREE_DISK_SIZE=64
KDUMP_VERBOSE=3
KDUMP_DUMPLEVEL=31
KDUMP_DUMPFORMAT=lzo
KDUMP_CONTINUE_ON_ERROR=yes
KDUMP_NETCONFIG=eth1:static
KDUMP_NET_TIMEOUT=30
Using this configuration, Kdump fails to reach the network when trying
to write the dump to the FTP server. To solve this issue, add the network
configuration to KDUMP_COMMANDLINE_APPEND in
/etc/sysconfig/kdump. The general pattern for this
looks like the following:
KDUMP_COMMANDLINE_APPEND='ip=CLIENT IP:SERVER IP:GATEWAY IP:NETMASK:CLIENT HOSTNAME:DEVICE:PROTOCOL'
For the example configuration this would result in:
KDUMP_COMMANDLINE_APPEND='ip=10.50.50.100:10.50.50.140:10.60.48.1:255.255.240.0:dump-client:eth1:none'
To configure Kdump with YaST, you need to install the
yast2-kdump package. Then either start the
Kernel Kdump module in the System
category of the YaST control center, or enter yast2 kdump on the
command line as root.
In the Start-Up window, select Enable Kdump.
The values for Kdump Memory are automatically generated
the first time you open the window.
However, that does not mean that they are always sufficient.
To set the right values, follow the instructions in
Section 17.4, “Calculating crashkernel Allocation Size”.
If you have set up Kdump on a computer and later decide to change the amount of RAM or hard disks available to it, YaST will continue to display and use outdated memory values.
To work around this, determine the necessary memory again, as described in
Section 17.4, “Calculating crashkernel Allocation Size”.
Then set it manually in YaST.
Click Dump Filtering in the left pane, and check which pages to include in the dump. You do not need to include the following memory content to be able to debug kernel problems:
Pages filled with zero
Cache pages
User data pages
Free pages
In the Dump Target window, select the type of the dump target and the URL where you want to save the dump. If you select a network protocol, such as FTP or SSH, you need to enter relevant access information as well.
It is possible to specify a path for saving Kdump dumps where other applications also save their dumps. When cleaning its old dump files, Kdump will safely ignore other applications' dump files.
Fill in the Email Notification window information if you want Kdump to inform you about its events via e-mail, and confirm your changes with OK after fine-tuning Kdump in the Expert Settings window. Kdump is now configured.
After you obtain the dump, it is time to analyze it. There are several options.
The original tool to analyze the dumps is GDB. You can even use it in the latest environments, although it has several disadvantages and limitations:
GDB was not specifically designed to debug kernel dumps.
GDB does not support ELF64 binaries on 32-bit platforms.
GDB does not understand formats other than ELF dumps (it cannot debug compressed dumps).
That is why the crash utility was implemented. It
analyzes crash dumps and debugs the running system as well. It provides
functionality specific to debugging the Linux kernel and is much more
suitable for advanced debugging.
If you want to debug the Linux kernel, you need to install its debugging information package in addition. Check if the package is installed on your system with:
tux > zypper se kernel | grep debug
If you subscribed your system for online updates, you can find
“debuginfo” packages in the
*-Debuginfo-Updates online installation repository
relevant for openSUSE Leap 42.3. Use YaST to
enable the repository.
To open the captured dump in crash on the machine that
produced the dump, use a command like this:
crash /boot/vmlinux-2.6.32.8-0.1-default.gz \
/var/crash/2010-04-23-11\:17/vmcore
The first parameter represents the kernel image. The second parameter is
the dump file captured by Kdump. You can find this file under
/var/crash by default.
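Once the dump is open, a short session might look like this (these are standard crash commands; the actual output depends on the dump):

```shell
# A brief interactive session inside the crash utility:
#   crash> sys    -- general system information for the crashed kernel
#   crash> log    -- kernel message buffer, often showing the panic message
#   crash> bt     -- backtrace of the task running at the time of the crash
#   crash> ps     -- process list at the time of the crash
#   crash> exit
```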
openSUSE Leap ships with the utility kdumpid
(included in a package with the same name) for identifying unknown
kernel dumps. It can be used to extract basic information such as
architecture and kernel release. It supports lkcd, diskdump, Kdump
files and ELF dumps. When called with the -v
switch, it tries to extract additional information, such as the machine
type, kernel banner string, and kernel configuration flavor.
The Linux kernel comes in Executable and Linkable Format (ELF). This
file is usually called vmlinux and is directly
generated in the compilation process. Not all boot loaders support
ELF binaries, especially on the AMD64/Intel 64 architecture.
The following solutions exist on different architectures supported by
openSUSE® Leap.
Kernel packages for AMD64/Intel 64 from SUSE contain two kernel
files: vmlinuz and vmlinux.gz.
vmlinuz.
This is the file executed by the boot loader.
The Linux kernel consists of two parts:
the kernel itself (vmlinux) and the setup code
run by the boot loader.
These two parts are linked together to create
vmlinuz
(note the distinction: z compared to x).
In the kernel source tree, the file is called
bzImage.
vmlinux.gz.
This is a compressed ELF image that can be used by
crash and GDB.
The ELF image is never used by the boot loader itself on AMD64/Intel 64.
Therefore, only a compressed version is shipped.
The yaboot boot loader on POWER also supports
loading ELF images, but not compressed ones. In the POWER kernel package,
there is an ELF Linux kernel file vmlinux.
As far as crash is concerned, this is the easiest
architecture.
If you decide to analyze the dump on another machine, you must check both the architecture of the computer and the files necessary for debugging.
You can analyze the dump on another computer only if it runs a Linux
system of the same architecture. To check the compatibility, use the
command uname -i on both computers
and compare the outputs.
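The comparison described above can be sketched as a small shell check. REMOTE is a hypothetical placeholder for the other computer; in practice you would run uname -i there (for example, over ssh) and substitute its output:

```shell
# Sketch: compare the architecture of this host with that of the
# analysis host. "remote_arch" is a placeholder; substitute the
# output of "uname -i" from the other computer.
local_arch="$(uname -i)"
remote_arch="$local_arch"   # placeholder value for illustration
if [ "$local_arch" = "$remote_arch" ]; then
    echo "architectures match: $local_arch"
else
    echo "architectures differ; analyze the dump elsewhere" >&2
fi
```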
If you are going to analyze the dump on another computer, you also need
the appropriate files from the kernel and
kernel debug packages.
Put the kernel dump, the kernel image from
/boot, and its associated debugging info file
from /usr/lib/debug/boot into a single empty
directory.
Additionally, copy the kernel modules from
/lib/modules/$(uname -r)/kernel/ and the
associated debug info files from
/usr/lib/debug/lib/modules/$(uname -r)/kernel/
into a subdirectory named modules.
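The layout described above can be assembled with a few shell commands. The following is only a sketch: KVER is a hypothetical kernel release (use the release of the kernel that produced the dump), and the copy commands are commented out because they assume the kernel and its debuginfo packages are installed locally:

```shell
# Sketch: gather everything crash needs into one empty directory.
KVER="2.6.32.8-0.1-default"   # hypothetical kernel release of the dump
DEST="$(mktemp -d)"           # empty analysis directory
mkdir -p "$DEST/modules"
# The copies below assume the dump, kernel, and debuginfo files exist;
# uncomment and adjust them on a real system:
# cp /var/crash/*/vmcore                              "$DEST/"
# cp "/boot/vmlinux-$KVER.gz"                         "$DEST/"
# cp "/usr/lib/debug/boot/vmlinux-$KVER.debug"        "$DEST/"
# cp -r "/lib/modules/$KVER/kernel/."                 "$DEST/modules/"
# cp -r "/usr/lib/debug/lib/modules/$KVER/kernel/."   "$DEST/modules/"
echo "analysis directory prepared in $DEST"
```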
In the directory with the dump, the kernel image, its debug info
file, and the modules subdirectory, start the
crash utility:
tux > crash VMLINUX-VERSION vmcore
Regardless of the computer on which you analyze the dump, the crash utility will produce output similar to this:
tux > crash /boot/vmlinux-2.6.32.8-0.1-default.gz \
/var/crash/2010-04-23-11\:17/vmcore

crash 4.0-7.6
Copyright (C) 2002, 2003, 2004, 2005, 2006, 2007, 2008  Red Hat, Inc.
Copyright (C) 2004, 2005, 2006  IBM Corporation
Copyright (C) 1999-2006  Hewlett-Packard Co
Copyright (C) 2005, 2006  Fujitsu Limited
Copyright (C) 2006, 2007  VA Linux Systems Japan K.K.
Copyright (C) 2005  NEC Corporation
Copyright (C) 1999, 2002, 2007  Silicon Graphics, Inc.
Copyright (C) 1999, 2000, 2001, 2002  Mission Critical Linux, Inc.
This program is free software, covered by the GNU General Public License,
and you are welcome to change it and/or distribute copies of it under
certain conditions.  Enter "help copying" to see the conditions.
This program has absolutely no warranty.  Enter "help warranty" for details.

GNU gdb 6.1
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "x86_64-unknown-linux-gnu"...

      KERNEL: /boot/vmlinux-2.6.32.8-0.1-default.gz
   DEBUGINFO: /usr/lib/debug/boot/vmlinux-2.6.32.8-0.1-default.debug
    DUMPFILE: /var/crash/2009-04-23-11:17/vmcore
        CPUS: 2
        DATE: Thu Apr 23 13:17:01 2010
      UPTIME: 00:10:41
LOAD AVERAGE: 0.01, 0.09, 0.09
       TASKS: 42
    NODENAME: eros
     RELEASE: 2.6.32.8-0.1-default
     VERSION: #1 SMP 2010-03-31 14:50:44 +0200
     MACHINE: x86_64  (2999 Mhz)
      MEMORY: 1 GB
       PANIC: "SysRq : Trigger a crashdump"
         PID: 9446
     COMMAND: "bash"
        TASK: ffff88003a57c3c0  [THREAD_INFO: ffff880037168000]
         CPU: 1
       STATE: TASK_RUNNING (SYSRQ)

crash>
The command output first prints useful data: there were 42 tasks
running at the moment of the kernel crash. The cause of the crash was a
SysRq trigger invoked by the task with PID 9446. It was a Bash process,
because the echo used to trigger the crash is an internal
command of the Bash shell.
The crash utility builds upon GDB and provides
many additional commands. If you enter bt
without any parameters, the backtrace of the task running at the moment
of the crash is printed:
crash> bt
PID: 9446   TASK: ffff88003a57c3c0  CPU: 1   COMMAND: "bash"
 #0 [ffff880037169db0] crash_kexec at ffffffff80268fd6
 #1 [ffff880037169e80] __handle_sysrq at ffffffff803d50ed
 #2 [ffff880037169ec0] write_sysrq_trigger at ffffffff802f6fc5
 #3 [ffff880037169ed0] proc_reg_write at ffffffff802f068b
 #4 [ffff880037169f10] vfs_write at ffffffff802b1aba
 #5 [ffff880037169f40] sys_write at ffffffff802b1c1f
 #6 [ffff880037169f80] system_call_fastpath at ffffffff8020bfbb
    RIP: 00007fa958991f60  RSP: 00007fff61330390  RFLAGS: 00010246
    RAX: 0000000000000001  RBX: ffffffff8020bfbb  RCX: 0000000000000001
    RDX: 0000000000000002  RSI: 00007fa959284000  RDI: 0000000000000001
    RBP: 0000000000000002   R8: 00007fa9592516f0   R9: 00007fa958c209c0
    R10: 00007fa958c209c0  R11: 0000000000000246  R12: 00007fa958c1f780
    R13: 00007fa959284000  R14: 0000000000000002  R15: 00000000595569d0
    ORIG_RAX: 0000000000000001  CS: 0033  SS: 002b
crash>
Now it is clear what happened: The internal echo
command of Bash shell sent a character to
/proc/sysrq-trigger. After the corresponding
handler recognized this character, it invoked the
crash_kexec() function. This function called
panic() and Kdump saved a dump.
In addition to the basic GDB commands and the extended version of
bt, the crash utility defines other commands
related to the structure of the Linux kernel. These commands understand
the internal data structures of the Linux kernel and present their
contents in a human readable format. For example, you can list the
tasks running at the moment of the crash with ps.
With sym, you can list all the kernel symbols with
their corresponding addresses, or query an individual symbol for its
value. With files, you can display all the open file
descriptors of a process. With kmem, you can display
details about the kernel memory usage. With vm, you
can inspect the virtual memory of a process, even at the level of
individual page mappings. The list of useful commands is very long and
many of these accept a wide range of options.
The commands that we mentioned reflect the functionality of the common
Linux commands, such as ps and
lsof. To find out the exact sequence of
events with the debugger, you need to know how to use GDB and to have
strong debugging skills. Both of these are out of the scope of this
document. In addition, you need to understand the Linux kernel. Several
useful reference information sources are given at the end of this
document.
The configuration for Kdump is stored in
/etc/sysconfig/kdump. You can also use the YaST Kdump
module to configure it. The following Kdump
options may be useful for you.
You can change the directory for the kernel dumps with the
KDUMP_SAVEDIR option. Keep in mind that the size of
kernel dumps can be very large. Kdump will refuse to save the dump
if the free disk space, after subtracting the estimated dump size,
would drop below the value specified by the KDUMP_FREE_DISK_SIZE
option. Note that KDUMP_SAVEDIR understands the URL format
PROTOCOL://SPECIFICATION, where
PROTOCOL is one of file,
ftp, sftp, nfs or
cifs, and SPECIFICATION varies for each
protocol. For example, to save a kernel dump on an FTP server, use the
following URL as a template:
ftp://username:password@ftp.example.com:123/var/crash.
Kernel dumps are usually huge and contain many pages that are not
necessary for analysis. With the KDUMP_DUMPLEVEL option,
you can omit such pages. The option accepts a numeric value between 0
and 31. If you specify 0, the dump size will
be largest; if you specify 31, the dump will be
smallest. For a complete table of possible values, see the
manual page of kdump (man 7 kdump).
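The dump level can be understood as a bitmask of page types to omit. As a hedged illustration (the exact bit meanings are listed in man 7 kdump and follow makedumpfile's convention), level 31 is simply all five bits set:

```shell
# Hypothetical bit assignments following makedumpfile's dump level:
ZERO=1            # pages filled with zero
CACHE=2           # page cache pages
CACHE_PRIVATE=4   # page cache pages with private attributes
USER=8            # user process data pages
FREE=16           # free pages
level=$(( ZERO | CACHE | CACHE_PRIVATE | USER | FREE ))
echo "$level"   # 31 omits every optional page type
```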
Sometimes it is very useful to make the kernel dump smaller, for
example, if you want to transfer the dump over the network or need to
save disk space in the dump directory. This can be done by setting
KDUMP_DUMPFORMAT to compressed. The
crash utility supports dynamic decompression of
compressed dumps.
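Put together, a minimal sketch of the relevant /etc/sysconfig/kdump settings might look as follows. The NFS server name is a hypothetical example, and the comments summarize the behavior described above:

```shell
# Hypothetical excerpt from /etc/sysconfig/kdump:
KDUMP_SAVEDIR="nfs://dumphost.example.com/var/crash"  # PROTOCOL://SPECIFICATION
KDUMP_FREE_DISK_SIZE="64"       # refuse to save if less than this (in MB) would remain
KDUMP_DUMPLEVEL="31"            # omit as many page types as possible
KDUMP_DUMPFORMAT="compressed"   # crash decompresses such dumps on the fly
```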
You always need to execute systemctl restart kdump
after you make manual changes to
/etc/sysconfig/kdump. Otherwise, these changes will
only take effect the next time you reboot the system.
There is no single comprehensive reference to Kexec and Kdump usage. However, there are helpful resources that deal with certain aspects:
For the Kexec utility usage, see the manual page of
kexec (man 8 kexec).
IBM provides a comprehensive documentation on how to use dump tools on the z Systems architecture at http://www.ibm.com/developerworks/linux/linux390/development_documentation.html.
You can find general information about Kexec at http://www.ibm.com/developerworks/linux/library/l-kexec.html . This information might be slightly outdated.
For more details on Kdump specific to openSUSE Leap, see http://ftp.suse.com/pub/people/tiwai/kdump-training/kdump-training.pdf .
An in-depth description of Kdump internals can be found at http://lse.sourceforge.net/kdump/documentation/ols2oo5-kdump-paper.pdf .
For more details on crash dump analysis and
debugging tools, use the following resources:
In addition to the info page of GDB (info gdb),
there are printable guides at
http://sourceware.org/gdb/documentation/ .
A white paper with a comprehensive description of the crash utility usage can be found at http://people.redhat.com/anderson/crash_whitepaper/.
The crash utility also features comprehensive online help. Use
help COMMAND to display
the online help for the given command.
If you have the necessary Perl skills, you can use Alicia to make the debugging easier. This Perl-based front-end to the crash utility can be found at http://alicia.sourceforge.net/ .
If you prefer to use Python instead, you should install Pykdump. This package helps you control GDB through Python scripts and can be downloaded from http://sf.net/projects/pykdump .
A very comprehensive overview of the Linux kernel internals is given in Understanding the Linux Kernel by Daniel P. Bovet and Marco Cesati (ISBN 978-0-596-00565-8).
For network environments, it is vital to keep the computer and other devices' clocks synchronized and accurate. There are several solutions to achieve this, for example the widely used Network Time Protocol (NTP) described in Chapter 18, Time Synchronization with NTP.
The Precision Time Protocol (PTP) is a protocol capable of sub-microsecond accuracy, which is better than what NTP achieves. PTP support is divided between the kernel and user space. The kernel in openSUSE Leap includes support for PTP clocks, which are provided by network drivers.
The clocks managed by PTP follow a master-slave hierarchy, in which the slaves are synchronized to their masters. The hierarchy is updated by the best master clock (BMC) algorithm, which runs on every clock. A clock with only one port can be either master or slave; such a clock is called an ordinary clock (OC). A clock with multiple ports can be master on one port and slave on another; such a clock is called a boundary clock (BC). The top-level master is called the grandmaster clock. The grandmaster clock can be synchronized with a Global Positioning System (GPS). This way, disparate networks can be synchronized with a high degree of accuracy.
Hardware support is the main advantage of PTP. It is available in various network switches and network interface controllers (NICs). While it is possible to use non-PTP-enabled hardware within the network, the best possible accuracy is achieved when all network components between the PTP clocks are PTP hardware enabled.
On openSUSE Leap, the implementation of PTP is provided by the
linuxptp package. Install it with zypper
install linuxptp. It includes the ptp4l and
phc2sys programs for clock synchronization.
ptp4l implements the PTP boundary clock and ordinary
clock. When hardware time stamping is enabled, ptp4l
synchronizes the PTP hardware clock to the master clock. With software time
stamping, it synchronizes the system clock to the master clock.
phc2sys is needed only with hardware time stamping to
synchronize the system clock to the PTP hardware clock on the network
interface card (NIC).
PTP requires that the kernel network driver in use supports either software
or hardware time stamping. Moreover, the NIC must support time stamping in
the physical hardware. You can verify the driver and NIC time stamping
capabilities with ethtool:
tux > sudo ethtool -T eth0
Time stamping parameters for eth0:
Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
        software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
        software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
        hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)
PTP Hardware Clock: 0
Hardware Transmit Timestamp Modes:
        off                   (HWTSTAMP_TX_OFF)
        on                    (HWTSTAMP_TX_ON)
Hardware Receive Filter Modes:
        none                  (HWTSTAMP_FILTER_NONE)
        all                   (HWTSTAMP_FILTER_ALL)
Software time stamping requires the following parameters:
SOF_TIMESTAMPING_SOFTWARE
SOF_TIMESTAMPING_TX_SOFTWARE
SOF_TIMESTAMPING_RX_SOFTWARE
Hardware time stamping requires the following parameters:
SOF_TIMESTAMPING_RAW_HARDWARE
SOF_TIMESTAMPING_TX_HARDWARE
SOF_TIMESTAMPING_RX_HARDWARE
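A quick way to verify the required flags is to grep the ethtool report. This sketch checks saved sample output (taken from the listing above) for the hardware time stamping capabilities; on a real system, capture the report with ethtool -T first:

```shell
# Check an "ethtool -T" report for the hardware time stamping flags.
# "report" holds sample data here; in practice capture it with
# report="$(ethtool -T eth0)".
report='Capabilities:
        hardware-transmit     (SOF_TIMESTAMPING_TX_HARDWARE)
        software-transmit     (SOF_TIMESTAMPING_TX_SOFTWARE)
        hardware-receive      (SOF_TIMESTAMPING_RX_HARDWARE)
        software-receive      (SOF_TIMESTAMPING_RX_SOFTWARE)
        software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
        hardware-raw-clock    (SOF_TIMESTAMPING_RAW_HARDWARE)'
hw_ok=yes
for flag in SOF_TIMESTAMPING_RAW_HARDWARE \
            SOF_TIMESTAMPING_TX_HARDWARE \
            SOF_TIMESTAMPING_RX_HARDWARE; do
    echo "$report" | grep -q "$flag" || hw_ok=no
done
echo "hardware time stamping supported: $hw_ok"
```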
ptp4l
ptp4l uses hardware time stamping by default. As
root, you need to specify the network interface capable of hardware
time stamping with the -i option. The -m option
tells ptp4l to print its output to the standard output
instead of the system's logging facility:
tux > sudo ptp4l -m -i eth0
selected eth0 as PTP clock
port 1: INITIALIZING to LISTENING on INITIALIZE
port 0: INITIALIZING to LISTENING on INITIALIZE
port 1: new foreign master 00a152.fffe.0b334d-1
selected best master clock 00a152.fffe.0b334d
port 1: LISTENING to UNCALIBRATED on RS_SLAVE
master offset     -25937 s0 freq      +0 path delay     12340
master offset     -27887 s0 freq      +0 path delay     14232
master offset     -38802 s0 freq      +0 path delay     13847
master offset     -36205 s1 freq      +0 path delay     10623
master offset      -6975 s2 freq  -30575 path delay     10286
port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
master offset      -4284 s2 freq  -30135 path delay      9892
The master offset value represents the measured offset
from the master (in nanoseconds).
The s0, s1, s2
indicators show the different states of the clock servo:
s0 is unlocked, s1 is clock step, and
s2 is locked. If the servo is in the locked state
(s2), the clock will not be stepped (only slowly
adjusted) if the pi_offset_const option is set to a
negative value in the configuration file (see man 8
ptp4l for more information).
The freq value represents the frequency adjustment of
the clock (in parts per billion, ppb).
The path delay value represents the estimated delay of
the synchronization messages sent from the master (in nanoseconds).
Port 0 is a Unix domain socket used for local PTP management. Port 1 is the
eth0 interface.
INITIALIZING, LISTENING,
UNCALIBRATED and SLAVE are examples
of port states which change on INITIALIZE,
RS_SLAVE, and MASTER_CLOCK_SELECTED
events. When the port state changes from UNCALIBRATED to
SLAVE, the computer has successfully synchronized with a
PTP master clock.
You can enable software time stamping with the -S option.
tux > sudo ptp4l -m -S -i eth3
You can also run ptp4l as a service:
tux > sudo systemctl start ptp4l
In this case, ptp4l reads its options from the
/etc/sysconfig/ptp4l file. By default, this file tells
ptp4l to read the configuration options from
/etc/ptp4l.conf. For more information on
ptp4l options and the configuration file settings, see
man 8 ptp4l.
To enable the ptp4l service permanently, run the
following:
tux > sudo systemctl enable ptp4l
To disable it, run
tux > sudo systemctl disable ptp4l
ptp4l Configuration File
ptp4l can read its configuration from an optional
configuration file. As no configuration file is used by default, you need
to specify it with -f.
tux > sudo ptp4l -f /etc/ptp4l.conf
The configuration file is divided into sections. The global section
(indicated as [global]) sets the program options, clock
options and default port options. Other sections are port specific, and
they override the default port options. The name of the section is the name
of the configured port—for example, [eth0]. An
empty port section can be used to replace the command line option.
[global]
verbose               1
time_stamping         software
[eth0]
The example configuration file is an equivalent of the following command's options:
tux > sudo ptp4l -i eth0 -m -S
For a complete list of ptp4l configuration options, see
man 8 ptp4l.
ptp4l measures time delay in two different ways:
peer-to-peer (P2P) or end-to-end
(E2E).
Peer-to-peer (P2P).
This method is specified with -P.
It reacts to changes in the network environment faster and is more accurate in measuring the delay. It is only used in networks where each port exchanges PTP messages with one other port. P2P needs to be supported by all hardware on the communication path.
End-to-end (E2E).
This method is specified with -E. This is the default.
Automatic.
This method is specified with -A. The automatic option
starts ptp4l in E2E mode, and changes to P2P mode if
a peer delay request is received.
All clocks on a single PTP communication path must use the same method to measure the time delay. A warning will be printed if either a peer delay request is received on a port using the E2E mechanism, or an E2E delay request is received on a port using the P2P mechanism.
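The same choice can be made persistent in the configuration file with the delay_mechanism option, which accepts the values E2E, P2P, and Auto (see man 8 ptp4l). A minimal sketch:

```
[global]
delay_mechanism       P2P
```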
pmc
You can use the pmc client to obtain more detailed
information about ptp4l. It reads actions, specified by name and
management ID, from the standard input or from the command line. Then it
sends the actions over the selected transport and
prints any received replies. There are three supported actions:
GET retrieves the specified information,
SET updates the specified information, and
CMD (or COMMAND) initiates the
specified event.
By default, the management commands are addressed to all ports. The
TARGET command can be used to select a particular clock
and port for the subsequent messages. For a complete list of management
IDs, run pmc help.
tux > sudo pmc -u -b 0 'GET TIME_STATUS_NP'
sending: GET TIME_STATUS_NP
        90f2ca.fffe.20d7e9-0 seq 0 RESPONSE MANAGMENT TIME_STATUS_NP
                master_offset              283
                ingress_time               1361569379345936841
                cumulativeScaledRateOffset +1.000000000
                scaledLastGmPhaseChange    0
                gmTimeBaseIndicator        0
                lastGmPhaseChange          0x0000'0000000000000000.0000
                gmPresent                  true
                gmIdentity                 00b058.feef.0b448a
The -b option specifies the boundary hops value in sent
messages. Setting it to zero limits the boundary to the local
ptp4l instance. Increasing the value will retrieve the
messages also from PTP nodes that are further from the local instance. The
returned information may include:
The number of communication nodes to the grandmaster clock.
The last measured offset of the clock from the master clock (nanoseconds).
The estimated delay of the synchronization messages sent from the master clock (nanoseconds).
If true, the PTP clock is synchronized to the master
clock; the local clock is not the grandmaster clock.
This is the grandmaster's identity.
For a complete list of pmc command line options, see
man 8 pmc.
phc2sys
Use phc2sys to synchronize the system clock to the PTP
hardware clock (PHC) on the network card. The system clock is considered a
slave, while the network card is the
master. The PHC itself is synchronized with
ptp4l (see Section 18.2, “Using PTP”). Use
-s to specify the master clock by device or network
interface. Use -w to wait until ptp4l is
in a synchronized state.
tux > sudo phc2sys -s eth0 -w
PTP operates in International Atomic Time (TAI), while
the system clock uses Coordinated Universal Time (UTC).
If you do not specify -w to wait for
ptp4l synchronization, you can specify the offset in
seconds between TAI and UTC with -O:
tux > sudo phc2sys -s eth0 -O -35
You can run phc2sys as a service as well:
tux > sudo systemctl start phc2sys
In this case, phc2sys reads its options from the
/etc/sysconfig/phc2sys file. For more information on
phc2sys options, see man 8 phc2sys.
To enable the phc2sys service permanently, run the
following:
tux > sudo systemctl enable phc2sys
To disable it, run
tux > sudo systemctl disable phc2sys
When PTP time synchronization is working properly and hardware time
stamping is used, ptp4l and phc2sys
output messages with time offsets and frequency adjustments periodically to
the system log.
An example of the ptp4l output:
ptp4l[351.358]: selected /dev/ptp0 as PTP clock
ptp4l[352.361]: port 1: INITIALIZING to LISTENING on INITIALIZE
ptp4l[352.361]: port 0: INITIALIZING to LISTENING on INITIALIZE
ptp4l[353.210]: port 1: new foreign master 00a069.eefe.0b442d-1
ptp4l[357.214]: selected best master clock 00a069.eefe.0b662d
ptp4l[357.214]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[359.224]: master offset       3304 s0 freq      +0 path delay      9202
ptp4l[360.224]: master offset       3708 s1 freq  -28492 path delay      9202
ptp4l[361.224]: master offset      -3145 s2 freq  -32637 path delay      9202
ptp4l[361.224]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[362.223]: master offset       -145 s2 freq  -30580 path delay      9202
ptp4l[363.223]: master offset       1043 s2 freq  -28436 path delay      8972
[...]
ptp4l[371.235]: master offset        285 s2 freq  -28511 path delay      9199
ptp4l[372.235]: master offset        -78 s2 freq  -28788 path delay      9204
An example of the phc2sys output:
phc2sys[616.617]: Waiting for ptp4l...
phc2sys[628.628]: phc offset     66341 s0 freq      +0 delay   2729
phc2sys[629.628]: phc offset     64668 s1 freq  -37690 delay   2726
[...]
phc2sys[646.630]: phc offset      -333 s2 freq  -37426 delay   2747
phc2sys[646.630]: phc offset       194 s2 freq  -36999 delay   2749
ptp4l normally writes messages very frequently. You can
reduce the frequency with the summary_interval
directive, whose value N sets the interval to 2^N seconds. For example, to
reduce the output to one message every 1024 (that is, 2^10) seconds, add the
following line to the /etc/ptp4l.conf file:
summary_interval 10
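As a quick sanity check of the 2^N relationship, the resulting interval can be computed in the shell:

```shell
# summary_interval N produces one summary line every 2^N seconds.
N=10
interval=$(( 2 ** N ))
echo "summary_interval $N -> one message every $interval seconds"
```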
You can also reduce the frequency of the phc2sys
command's updates with the -u
SUMMARY-UPDATES option.
This section includes several examples of ptp4l
configuration. The examples are not full configuration files but rather a
minimal list of changes to be made to the specific files. The string
ethX stands for the actual network interface name
in your setup.
/etc/sysconfig/ptp4l:
OPTIONS="-f /etc/ptp4l.conf -i ethX"
No changes made to the distribution /etc/ptp4l.conf.
/etc/sysconfig/ptp4l:
OPTIONS="-f /etc/ptp4l.conf -i ethX"
/etc/sysconfig/phc2sys:
OPTIONS="-s ethX -w"
No changes made to the distribution /etc/ptp4l.conf.
/etc/sysconfig/ptp4l:
OPTIONS="-f /etc/ptp4l.conf -i ethX"
/etc/sysconfig/phc2sys:
OPTIONS="-s CLOCK_REALTIME -c ethX -w"
/etc/ptp4l.conf:
priority1 127
/etc/sysconfig/ptp4l:
OPTIONS="-f /etc/ptp4l.conf -i ethX"
/etc/ptp4l.conf:
priority1 127
NTP and PTP time synchronization tools can coexist, synchronizing time from one to another in both directions.
When chronyd is used to synchronize the local system clock, you can
configure ptp4l to be the grandmaster clock
distributing the time from the local system clock via PTP. Include the
priority1 option in /etc/ptp4l.conf:
[global]
priority1             127
[eth0]
Then run ptp4l:
tux > sudo ptp4l -f /etc/ptp4l.conf
When hardware time stamping is used, you need to synchronize the PTP
hardware clock to the system clock with phc2sys:
tux > sudo phc2sys -c eth0 -s CLOCK_REALTIME -w
If a highly accurate PTP grandmaster is available in a network without switches or routers with PTP support, a computer may operate as a PTP slave and a stratum-1 NTP server. Such a computer needs to have two or more network interfaces, and be close to the grandmaster or have a direct connection to it. This will ensure highly accurate synchronization in the network.
Configure the ptp4l and phc2sys
programs to use one network interface to synchronize the system clock using
PTP. Then configure chronyd to provide the system time using the other
interface:
bindaddress 192.0.131.47
hwtimestamp eth1
local stratum 1
When the DHCP client command dhclient receives a list
of NTP servers, it adds them to NTP configuration by default. To prevent
this behavior, set
NETCONFIG_NTP_POLICY=""
in the /etc/sysconfig/network/config file.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
This manual offers an introduction to setting up and managing
virtualization with KVM (Kernel-based Virtual Machine), Xen, and
Linux Containers (LXC) on openSUSE Leap. The first part introduces the
different virtualization solutions by describing their requirements, their
installations and SUSE's support status. The second part deals with
managing VM Guests and VM Host Servers with libvirt. The following
parts describe various administration tasks and practices and the last
three parts deal with hypervisor-specific topics.
Documentation for our products is available at http://doc.opensuse.org/, where you can also find the latest updates, and browse or download the documentation in various formats.
In addition, the product documentation
is usually available in your installed system under
/usr/share/doc/manual.
The following documentation is available for this product:
This manual will see you through your initial contact with openSUSE® Leap. Check out the various parts of this manual to learn how to install, use and enjoy your system.
Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.
Describes virtualization technology in general, and introduces libvirt—the unified interface to virtualization—and detailed information on specific hypervisors.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. The manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.
An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.
Introduces the GNOME desktop of openSUSE Leap. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.
Several feedback channels are available:
To report bugs for openSUSE Leap, go to https://bugzilla.opensuse.org/, log in, and click .
For feedback on the documentation of this product, you can also send a
mail to doc-team@suse.com. Make sure to include the
document title, the product version and the publication date of the
documentation. To report errors or suggest enhancements, provide a concise
description of the problem and refer to the respective section number and
page (or URL).
The following notices and typographical conventions are used in this documentation:
/etc/passwd: directory names and file names
PLACEHOLDER: replace PLACEHOLDER with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and
parameters
user: users or groups
package name: name of a package
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
, › : menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
Commands that must be run with root privileges. Often you can also
prefix these commands with the sudo command to run them
as non-privileged user.
root # command
tux > sudo command
Commands that can be run by non-privileged users.
tux > command
Notices
Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.
Important information you should be aware of before proceeding.
Additional information, for example about differences in software versions.
Helpful information, like a guideline or a piece of practical advice.
Virtualization is a technology that allows a machine (the host) to run one or more other operating systems (guest virtual machines) on top of the host operating system.
This chapter introduces and explains the components and technologies you need to understand to set up and manage a Xen-based virtualization environment.
Linux containers are a lightweight virtualization method to run multiple virtual units (“containers”) simultaneously on a single host. This is similar to the chroot environment. Containers are isolated with kernel Control Groups (cgroups) and kernel Namespaces.
libvirt is a library that provides a common API for managing popular
virtualization solutions, among them KVM, LXC, and Xen. The library
provides a normalized management API for these virtualization solutions,
allowing a stable, cross-hypervisor interface for higher-level management
tools. The library also provides APIs for management of virtual networks
and storage on the VM Host Server. The configuration of each VM Guest is stored
in an XML file.
With libvirt you can also manage your VM Guests remotely. It supports
TLS encryption, x509 certificates and authentication with SASL. This
enables managing VM Host Servers centrally from a single workstation,
alleviating the need to access each VM Host Server individually.
Using the libvirt-based tools is the recommended way of managing
VM Guests. Interoperability between libvirt and libvirt-based
applications has been tested and is an essential part of SUSE's support
stance.
None of the virtualization tools is installed by default.
openSUSE Leap includes the latest open source virtualization technologies, Xen and KVM. With these hypervisors, openSUSE Leap can be used to provision, de-provision, install, monitor and manage multiple virtual machines (VM Guests) on a single physical system (for more information see Hypervisor).
Out of the box, openSUSE Leap can create virtual machines running both modified, highly tuned, paravirtualized operating systems and fully virtualized unmodified operating systems. Full virtualization allows the guest OS to run unmodified and requires an AMD64/Intel 64 processor that supports either Intel* Virtualization Technology (Intel VT) or AMD* Virtualization (AMD-V).
The primary component of the operating system that enables virtualization is a hypervisor (or virtual machine manager), which is a layer of software that runs directly on server hardware. It controls platform resources, sharing them among multiple VM Guests and their operating systems by presenting virtualized hardware interfaces to each VM Guest.
openSUSE is a Linux server operating system that offers two types of hypervisors: Xen and KVM. Both hypervisors support virtualization on the AMD64/Intel 64 architecture. For the POWER architecture, KVM is supported. Both Xen and KVM support full virtualization mode. In addition, Xen supports paravirtualized mode. openSUSE Leap with Xen or KVM acts as a virtualization host server (VHS) that supports VM Guests with their own guest operating systems. The SUSE VM Guest architecture consists of a hypervisor and management components that constitute the VHS, which runs many application-hosting VM Guests.
In Xen, the management components run in a privileged VM Guest often called Dom0. In KVM, where the Linux kernel acts as the hypervisor, the management components run directly on the VHS.
Virtualization design provides many capabilities to your organization. Virtualization of operating systems is used in many computing areas:
Server consolidation: Many servers can be replaced by one big physical server, so hardware is consolidated and guest operating systems are converted to virtual machines. This also provides the ability to run legacy software on new hardware.
Isolation: A guest operating system can be fully isolated from the host running it, so if the virtual machine is corrupted, the host system is not harmed.
Migration: A process to move a running virtual machine to another physical machine. Live migration is an extended feature that allows this move without disconnection of the client or the application.
Disaster recovery: Virtualized guests are less dependent on the hardware, and the host server provides snapshot features that make it possible to restore a known good system state without any corruption.
Dynamic load balancing: A migration feature that provides a simple way to load-balance your services across your infrastructure.
Virtualization brings many advantages while providing the same service as a hardware server.
First, it reduces the cost of your infrastructure. Servers are mainly used to provide a service to a customer, and a virtualized operating system can provide the same service, with:
Less hardware: You can run several operating systems on one host, so hardware maintenance is reduced.
Less power/cooling: Less hardware means you do not need to invest as much in electric power, backup power, and cooling as demand for services grows.
Save space: Data center space is saved because you do not need additional hardware servers (fewer servers than services running).
Less management: Using a VM Guest simplifies the administration of your infrastructure.
Agility and productivity: Virtualization provides migration capabilities, live migration and snapshots. These features reduce downtime and make it easy to move a service from one place to another without any service interruption.
Guest operating systems are hosted on virtual machines in either full virtualization (FV) mode or paravirtual (PV) mode. Each virtualization mode has advantages and disadvantages.
Full virtualization mode lets virtual machines run unmodified operating systems, such as Windows* Server 2003. It can use either Binary Translation or hardware-assisted virtualization technology, such as AMD* Virtualization or Intel* Virtualization Technology. Using hardware assistance allows for better performance on processors that support it.
To be able to run under paravirtual mode, guest operating systems usually need to be modified for the virtualization environment. However, operating systems running in paravirtual mode have better performance than those running under full virtualization.
Operating systems currently modified to run in paravirtual mode are called paravirtualized operating systems and include openSUSE Leap and NetWare® 6.5 SP8.
VM Guests not only share CPU and memory resources of the host system, but also the I/O subsystem. Because software I/O virtualization techniques deliver less performance than bare metal, hardware solutions that deliver almost “native” performance have been developed recently. openSUSE Leap supports the following I/O virtualization techniques:
Fully Virtualized (FV) drivers emulate widely supported real devices, which can be used with an existing driver in the VM Guest. The guest is also called Hardware Virtual Machine (HVM). Since the physical device on the VM Host Server may differ from the emulated one, the hypervisor needs to process all I/O operations before handing them over to the physical device. Therefore all I/O operations need to traverse two software layers, a process that not only significantly impacts I/O performance, but also consumes CPU time.
Paravirtualization (PV) allows direct communication between the hypervisor and the VM Guest. With less overhead involved, performance is much better than with full virtualization. However, paravirtualization requires either the guest operating system to be modified to support the paravirtualization API or paravirtualized drivers.
This type of virtualization enhances HVM (see Full Virtualization) with paravirtualized (PV) drivers, and PV interrupt and timer handling.
VFIO stands for Virtual Function I/O and is a new user-level driver framework for Linux. It replaces the traditional KVM PCI Pass-Through device assignment. The VFIO driver exposes direct device access to user space in a secure memory (IOMMU) protected environment. With VFIO, a VM Guest can directly access hardware devices on the VM Host Server (pass-through), avoiding performance issues caused by emulation in performance critical paths. This method does not allow devices to be shared—each device can only be assigned to a single VM Guest. VFIO needs to be supported by the VM Host Server CPU, chipset and the BIOS/EFI.
Compared to the legacy KVM PCI device assignment, VFIO has the following advantages:
Resource access is compatible with secure boot.
Device is isolated and its memory access protected.
Offers a user space device driver with a more flexible device ownership model.
Is independent of KVM technology, and not bound to the x86 architecture only.
As of openSUSE 42.2, the USB and PCI Pass-Through methods of device assignment are considered deprecated and have been superseded by the VFIO model.
The latest I/O virtualization technique, Single Root I/O Virtualization (SR-IOV), combines the benefits of the aforementioned techniques—performance and the ability to share a device with several VM Guests. SR-IOV requires special I/O devices that are capable of replicating resources so they appear as multiple separate devices. Each such “pseudo” device can be directly used by a single guest. However, for network cards, for example, the number of concurrent queues that can be used is limited, potentially reducing performance for the VM Guest compared to paravirtualized drivers. On the VM Host Server, SR-IOV must be supported by the I/O device, the CPU and chipset, the BIOS/EFI and the hypervisor—for setup instructions see Section 13.10, “Assigning a Host PCI Device to a VM Guest”.
To be able to use the VFIO and SR-IOV features, the VM Host Server needs to fulfill the following requirements:
IOMMU needs to be enabled in the BIOS/EFI.
For Intel CPUs, the kernel parameter intel_iommu=on
needs to be provided on the kernel command line. For more information,
see Section 12.3.3.2, “ Tab”.
The VFIO infrastructure needs to be available. This can be achieved by
loading the kernel module
vfio_pci. For more information,
see Section 10.6.4, “Loading Kernel Modules”.
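As a sketch, the prerequisites above could be checked from a shell. The helper function and sample command line below are illustrative only; on a real VM Host Server you would inspect /proc/cmdline and the loaded kernel modules directly, as shown in the comments:

```shell
# Sketch: checking VFIO prerequisites on a Linux VM Host Server.
# The helper only inspects a kernel command-line string; commands that
# require a live host are shown as comments.

has_intel_iommu() {
    # succeed if the given kernel command line contains intel_iommu=on
    case " $1 " in
        *" intel_iommu=on "*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live system you would run, for example:
#   has_intel_iommu "$(cat /proc/cmdline)" && echo "Intel IOMMU enabled"
#   lsmod | grep -q '^vfio_pci' && echo "vfio_pci module loaded"

has_intel_iommu "root=/dev/sda2 quiet intel_iommu=on" && echo "Intel IOMMU enabled"
# prints "Intel IOMMU enabled"
```

Whether the BIOS/EFI actually has IOMMU support enabled can only be verified in the firmware setup or in the kernel boot messages.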
This chapter introduces and explains the components and technologies you need to understand to set up and manage a Xen-based virtualization environment.
The basic components of a Xen-based virtualization environment are the Xen hypervisor, the Dom0, any number of other VM Guests, and the tools, commands, and configuration files that let you manage virtualization. Collectively, the physical computer running all these components is called a VM Host Server because together these components form a platform for hosting virtual machines.
The Xen hypervisor, sometimes simply called a virtual machine monitor, is an open source software program that coordinates the low-level interaction between virtual machines and physical hardware.
The virtual machine host environment, also called Dom0 or controlling domain, is composed of several components, such as:
openSUSE Leap provides a graphical and a command line environment to manage the virtual machine host components and its virtual machines.
The term “Dom0” refers to a special domain that provides the management environment. This may be run either in graphical or in command line mode.
The xl tool stack based on the xenlight library (libxl). Use it to manage Xen guest domains.
QEMU—an open source software that emulates a full computer system, including a processor and various peripherals. It provides the ability to host operating systems in either full virtualization or paravirtualization mode.
A Xen-based virtual machine, also called a VM Guest or DomU, consists of the following components:
At least one virtual disk that contains a bootable operating system. The virtual disk can be based on a file, partition, volume, or other type of block device.
A configuration file for each guest domain. It is a text file
following the syntax described in the manual page man 5
xl.cfg.
Several network devices, connected to the virtual network provided by the controlling domain.
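Taken together, the components above could be described in a guest configuration file along these lines (a sketch only; the name, paths, and values are invented examples, not defaults):

```
# /etc/xen/example-guest.cfg -- a hypothetical xl guest domain configuration
name    = "example-guest"
type    = "hvm"       # fully virtualized guest; use "pv" for paravirtualized
memory  = 1024        # in MB
vcpus   = 2
disk    = [ "/var/lib/xen/images/example-guest/disk0.raw,raw,xvda,rw" ]
vif     = [ "bridge=br0" ]
```

A domain defined this way would typically be started with xl create /etc/xen/example-guest.cfg and stopped with xl shutdown example-guest.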
There is a combination of GUI tools, commands, and configuration files to help you manage and customize your virtualization environment.
The following graphic depicts a virtual machine host with four virtual machines. The Xen hypervisor is shown as running directly on the physical hardware platform. Note that the controlling domain is also a virtual machine, although it has several additional management tasks compared to all the other virtual machines.
On the left, the virtual machine host’s Dom0 is shown running the openSUSE Leap operating system. The two virtual machines shown in the middle are running paravirtualized operating systems. The virtual machine on the right shows a fully virtual machine running an unmodified operating system, such as the latest version of Microsoft Windows/Server.
KVM is a full virtualization solution for the AMD64/Intel 64 and the z Systems architectures supporting hardware virtualization.
VM Guests (virtual machines), virtual storage, and virtual networks
can be managed with QEMU tools directly, or with the
libvirt-based stack. The QEMU tools include
qemu-system-ARCH, the QEMU monitor,
qemu-img, and qemu-nbd. A
libvirt-based stack includes libvirt itself, along with
libvirt-based applications such as virsh,
virt-manager, virt-install, and
virt-viewer.
This full virtualization solution consists of two main components:
A set of kernel modules
(kvm.ko, kvm-intel.ko,
and kvm-amd.ko) that provides the core
virtualization infrastructure and processor-specific drivers.
A user space program
(qemu-system-ARCH) that provides
emulation for virtual devices and control mechanisms to manage VM Guests
(virtual machines).
The term KVM more properly refers to the kernel level virtualization functionality, but is in practice more commonly used to refer to the user space component.
QEMU can provide certain Hyper-V hypercalls for Windows* guests to partly emulate a Hyper-V environment. This can be used to achieve better behavior for Windows* guests that are Hyper-V enabled.
Linux containers are a lightweight virtualization method to run multiple virtual units (“containers”) simultaneously on a single host. This is similar to the chroot environment. Containers are isolated with kernel Control Groups (cgroups) and kernel Namespaces.
Containers provide virtualization at the operating system level where the kernel controls the isolated containers. This is unlike full virtualization solutions like Xen or KVM where the processor simulates a complete hardware environment and controls virtual machines.
Conceptually, containers can be seen as an improved chroot technique. The difference is that a chroot environment separates only the file system, whereas containers go further and provide resource management and control via cgroups.
Isolating applications and operating systems through containers.
Providing nearly native performance, as the container manages the allocation of resources in real time.
Controlling network interfaces and applying resources inside containers through cgroups.
All containers run inside the host system's kernel and not with a different kernel.
Only allows Linux “guest” operating systems.
Security depends on the host system; containers are not secure by themselves. If you need a secure system, you can confine containers using an AppArmor or SELinux profile.
libvirt is a library that provides a common API for managing popular
virtualization solutions, among them KVM, LXC, and Xen. The library
provides a normalized management API for these virtualization solutions,
allowing a stable, cross-hypervisor interface for higher-level management
tools. The library also provides APIs for management of virtual networks
and storage on the VM Host Server. The configuration of each VM Guest is stored
in an XML file.
With libvirt you can also manage your VM Guests remotely. It supports
TLS encryption, x509 certificates and authentication with SASL. This
enables managing VM Host Servers centrally from a single workstation,
alleviating the need to access each VM Host Server individually.
Using the libvirt-based tools is the recommended way of managing
VM Guests. Interoperability between libvirt and libvirt-based
applications has been tested and is an essential part of SUSE's support
stance.
The following libvirt-based tools for the command line are available on openSUSE Leap. All tools are provided by packages carrying the tool's name.
virsh
A command line tool to manage VM Guests with similar functionality
as the Virtual Machine Manager. Allows you to change a VM Guest's status (start,
stop, pause, etc.), to set up new guests and devices, or to edit
existing configurations. virsh is also useful to
script VM Guest management operations.
virsh takes the first argument as a
command and further arguments as options to this command:
virsh [-c URI] COMMAND DOMAIN-ID [OPTIONS]
Like zypper, virsh can also
be called without a command. In this case it starts a shell waiting for
your commands. This mode is useful when having to run subsequent
commands:
~> virsh -c qemu+ssh://wilber@mercury.example.com/system
Enter passphrase for key '/home/wilber/.ssh/id_rsa':
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # hostname
mercury.example.com
virt-install
A command line tool for creating new VM Guests using the
libvirt library. It supports graphical installations via VNC or
SPICE protocols. Given suitable
command line arguments, virt-install can run
completely unattended. This allows for easy automation of guest
installs. virt-install is the default installation
tool used by the Virtual Machine Manager.
The following libvirt-based graphical tools are available on openSUSE Leap. All tools are provided by packages carrying the tool's name.
virt-manager
The Virtual Machine Manager is a desktop tool for managing VM Guests. It provides the
ability to control the life cycle of existing machines (start/shutdown,
pause/resume, save/restore) and create new VM Guests. It allows
managing various types of storage and virtual networks. It provides
access to the graphical console of VM Guests with a built-in VNC viewer
and can be used to view performance
statistics. virt-manager supports connecting to a
local libvirtd, managing a local VM Host Server, or a remote libvirtd
managing a remote VM Host Server.
To start the Virtual Machine Manager, enter virt-manager at the command
prompt.
To disable automatic USB device redirection for a VM Guest using SPICE,
either launch virt-manager with the
--spice-disable-auto-usbredir parameter or run the
following command to persistently change the default behavior:
tux > dconf write /org/virt-manager/virt-manager/console/auto-redirect false
virt-viewer
A viewer for the graphical console of a VM Guest. It uses SPICE
(configured by default on the VM Guest) or VNC protocols and supports
TLS and x509 certificates. VM Guests can be accessed by name, ID, or
UUID. If the guest is not already running, the viewer can be told to
wait until the guest starts, before attempting to connect to the
console. virt-viewer is not installed by default and
is available after installing the package virt-viewer.
To disable automatic USB device redirection for a VM Guest using SPICE,
add an empty filter using the
--spice-usbredir-auto-redirect-filter='' parameter.
yast2 vm
A YaST module that simplifies the installation of virtualization tools and can set up a network bridge:
None of the virtualization tools is installed by default.
To install KVM and KVM tools, proceed as follows:
Start YaST and choose › .
Select for a minimal installation of
QEMU tools. Select if a
libvirt-based management stack is also desired. Confirm with
.
To enable normal networking for the VM Guest, using a network bridge is recommended. YaST offers to automatically configure a bridge on the VM Host Server. Agree to do so by choosing , otherwise choose .
After the setup has been finished, you can start setting up VM Guests. Rebooting the VM Host Server is not required.
To install Xen and Xen tools, proceed as follows:
Start YaST and choose › .
Select for a minimal installation of
Xen tools. Select if a
libvirt-based management stack is also desired. Confirm with
.
To enable normal networking for the VM Guest, using a network bridge is recommended. YaST offers to automatically configure a bridge on the VM Host Server. Agree to do so by choosing , otherwise choose .
After the setup has been finished, you need to reboot the machine with the Xen kernel.
If everything works as expected, change the default boot kernel with YaST and make the Xen-enabled kernel the default. For more information about changing the default kernel, see Section 12.3, “Configuring the Boot Loader with YaST”.
To install containers, proceed as follows:
Start YaST and choose › .
Select and confirm with .
It is possible to install virtualization packages using Zypper and
patterns. Run the command zypper in -t pattern
PATTERN. Available patterns are:
kvm_server: sets up the
KVM VM Host Server with QEMU tools for management
kvm_tools: installs the
libvirt tools for managing and monitoring VM Guests
xen_server: sets up the
Xen VM Host Server with Xen tools for management
xen_tools: installs the
libvirt tools for managing and monitoring VM Guests
There is no pattern for containers; install the libvirt-daemon-lxc package.
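As the list above shows, the pattern names follow a HYPERVISOR_server / HYPERVISOR_tools scheme, so the Zypper call can be composed from the hypervisor name. A sketch (the hv value is an example; run the printed command with root privileges to perform the installation):

```shell
# Compose the Zypper call from the hypervisor name ("kvm" or "xen").
# The printed command must be run as root to actually install.
hv=kvm
echo "zypper in -t pattern ${hv}_server ${hv}_tools"
```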
A VM Guest consists of an image containing an operating system and data files and a configuration file describing the VM Guest's virtual hardware resources. VM Guests are hosted on and controlled by the VM Host Server. This section provides generalized instructions for installing a VM Guest.
Most management tasks, such as starting or stopping a VM Guest, can
either be done using the graphical application Virtual Machine Manager or on the command
line using virsh. Connecting to the graphical console
via VNC is only possible from a graphical user interface.
Managing several VM Host Servers, each hosting multiple VM Guests, quickly
becomes difficult. One benefit of libvirt is the ability to connect to
several VM Host Servers at once, providing a single interface to manage all
VM Guests and to connect to their graphical console.
When managing a VM Guest on the VM Host Server itself, you can access the complete file system of the VM Host Server to attach or create virtual hard disks or to attach existing images to the VM Guest. However, this is not possible when managing VM Guests from a remote host. For this reason, libvirt…
This chapter introduces common networking configurations supported by
libvirt. It does not depend on the hypervisor used. It is valid for all
hypervisors supported by libvirt, such as KVM or Xen. These setups
can be achieved using both the graphical interface of Virtual Machine Manager and the command
line tool virsh.
Virtual Machine Manager's view offers in-depth information about the VM Guest's complete configuration and hardware equipment. Using this view, you can also change the guest configuration or add and modify virtual hardware. To access this view, open the guest's console in Virtual Machine Manager and either choose › from the menu, or click in the toolbar.
libvirtd #
The communication between the virtualization solutions (KVM, Xen, LXC)
and the libvirt API is managed by the daemon libvirtd. It needs to run
on the VM Host Server. libvirt client applications such as virt-manager, possibly
running on a remote machine, communicate with libvirtd running on the
VM Host Server, which services the request using native hypervisor APIs. Use the
following commands to start and stop libvirtd or check its status:
tux > sudo systemctl start libvirtd
tux > sudo systemctl status libvirtd
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Mon 2014-05-12 08:49:40 EDT; 2s ago
[...]
tux > sudo systemctl stop libvirtd
tux > sudo systemctl status libvirtd
[...]
   Active: inactive (dead) since Mon 2014-05-12 08:51:11 EDT; 4s ago
[...]
To automatically start libvirtd at boot time, either activate it using the
YaST module or by entering the following
command:
tux > sudo systemctl enable libvirtd
libvirtd
and xendomains
If libvirtd fails to start,
check if the service xendomains is
loaded:
tux > systemctl is-active xendomains
active
If the command returns active, you need to stop
xendomains before you can
start the libvirtd daemon. If
you want libvirtd to also start
after rebooting, additionally prevent xendomains from starting automatically. Disable
the service:
tux > sudo systemctl stop xendomains
tux > sudo systemctl disable xendomains
tux > sudo systemctl start libvirtd
xendomains and libvirtd provide the same service and when used
in parallel may interfere with one another. As an example, xendomains may attempt to start a domU already
started by libvirtd.
Virtual machines have few if any requirements above those required to run the operating system. If the operating system has not been optimized for the virtual machine host environment, it can only run on hardware-assisted virtualization computer hardware, in full virtualization mode, and requires specific device drivers to be loaded. The hardware that is presented to the VM Guest depends on the configuration of the host.
You should be aware of any licensing issues related to running a single licensed copy of an operating system on multiple virtual machines. Consult the operating system license agreement for more information.
The wizard helps you through the steps required to create a virtual machine and install its operating system. There are two ways to start it: Within Virtual Machine Manager, either click or choose › . Alternatively, start YaST and choose › .
Start the wizard either from YaST or Virtual Machine Manager.
Choose an installation source—either a locally available media or a network installation source. If you want to set up your VM Guest from an existing image, choose .
On a VM Host Server running the Xen hypervisor, you can choose whether to install a paravirtualized or a fully virtualized guest. The respective option is available under . Depending on this choice, not all installation options may be available.
Depending on your choice in the previous step, you need to provide the following data:
Specify the path on the VM Host Server to an ISO image containing the installation data. If it is available as a volume in a libvirt storage pool, you can also select it using . For more information, see Chapter 11, Managing Storage.
Alternatively, choose a physical CD-ROM or DVD inserted in the optical drive of the VM Host Server.
Provide the pointing to the installation source.
Valid URL prefixes are, for example, ftp://,
http://, https://, and
nfs://.
Under , provide a path to an auto-installation file (AutoYaST or Kickstart, for example) and kernel parameters. If you have provided a URL, the operating system should be detected automatically. If this is not the case, deselect and manually select the and .
When booting via PXE, you only need to provide the and the .
To set up the VM Guest from an existing image, you need to specify the path on the VM Host Server to the image. If it is available as a volume in a libvirt storage pool, you can also select it using . For more information, see Chapter 11, Managing Storage.
Choose the memory size and number of CPUs for the new virtual machine.
This step is omitted when is chosen in the first step.
Set up a virtual hard disk for the VM Guest. Either create a new disk
image or choose an existing one from a storage pool (for more information,
see Chapter 11, Managing Storage). If you choose to create a
disk, a qcow2 image will be created. By default, it is
stored under /var/lib/libvirt/images.
Setting up a disk is optional. If you are running a live system directly from CD or DVD, for example, you can omit this step by deactivating .
On the last screen of the wizard, specify the name for the virtual machine. To be offered the possibility to review and make changes to the virtualized hardware selection, activate . Find options to specify the network device under .
Click .
(Optional) If you kept the defaults in the previous step, the installation will now start. If you selected , a VM Guest configuration dialog opens. For more information about configuring VM Guests, see Chapter 13, Configuring Virtual Machines.
When you are done configuring, click .
The installation starts in a Virtual Machine Manager console window. Some key combinations, such as Ctrl–Alt–F1, are recognized by the VM Host Server but are not passed to the virtual machine. To bypass the VM Host Server, Virtual Machine Manager provides the “sticky key” functionality. Pressing Ctrl, Alt, or Shift three times makes the key sticky, then you can press the remaining keys to pass the combination to the virtual machine.
For example, to pass Ctrl–Alt–F2 to a Linux virtual machine, press Ctrl three times, then press Alt–F2. You can also press Alt three times, then press Ctrl–F2.
The sticky key functionality is available in the Virtual Machine Manager during and after installing a VM Guest.
virt-install #
virt-install is a command line tool that helps you create
new virtual machines using the libvirt library. It is useful if you cannot
use the graphical user interface, or need to automate the process of
creating virtual machines.
virt-install is a complex script with a lot of command
line switches. The following are required. For more information, see the man
page of virt-install (1).
--name VM_GUEST_NAME:
Specify the name of the new virtual machine. The name must be unique
across all guests known to the hypervisor on the same connection. It is
used to create and name the guest’s configuration file and you can
access the guest with this name from virsh.
Alphanumeric and _-.:+ characters are allowed.
--memory REQUIRED_MEMORY:
Specify the amount of memory to allocate for the new virtual machine in
megabytes.
--vcpus NUMBER_OF_CPUS:
Specify the number of virtual CPUs. For best performance, the number of
virtual processors should be less than or equal to the number of
physical processors.
--paravirt: Set up a paravirtualized guest. This is
the default if the VM Host Server supports paravirtualization and full
virtualization.
--hvm: Set up a fully virtualized guest.
--virt-type HYPERVISOR:
Specify the hypervisor. Supported values are kvm,
xen, or lxc.
Specify the type of the storage for the new virtual machine with one of
--disk, --filesystem, or
--nodisks. For example, --disk size=10 creates a 10 GB disk
in the default image location for the hypervisor and uses it for the
VM Guest. --filesystem
/export/path/on/vmhost specifies the
directory on the VM Host Server to be exported to the guest. And
--nodisks sets up a VM Guest without a local storage
(good for Live CDs).
Specify the installation method using one of --location,
--cdrom, --pxe,
--import, or --boot.
Use the --graphics VALUE
option to specify how to access the installation. openSUSE Leap supports
the values vnc or none.
If using vnc, virt-install tries to launch
virt-viewer. If it is not installed or cannot be run,
connect to the VM Guest manually with your preferred viewer. To
explicitly prevent virt-install from launching the
viewer use --noautoconsole. To define a password for
accessing the VNC session, use the following syntax: --graphics
vnc,password=PASSWORD.
In case you are using --graphics none, you can access
the VM Guest through operating system supported services, such as SSH or
VNC. Refer to the operating system installation manual on how to set up
these services in the installation system.
By default, the console is not enabled for new virtual machines installed
using virt-install. To enable it, use
--extra-args="console=ttyS0 textmode=1" as in the
following example:
tux > virt-install --virt-type kvm --name sles12 --memory 1024 \
--disk /var/lib/libvirt/images/disk1.qcow2 --os-variant sles12 \
--extra-args="console=ttyS0 textmode=1" --graphics none
After the installation has finished, the
/etc/default/grub file in the VM image will be
updated with the console=ttyS0 option on the
GRUB_CMDLINE_LINUX_DEFAULT line.
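The updated line then looks similar to the following (the other options on the line depend on the individual installation and are only examples):

```
GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda2 splash=silent quiet console=ttyS0"
```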
virt-install command line #
The following command line example creates a new SUSE Linux Enterprise Desktop 12 virtual machine with a virtio accelerated disk and network card. It creates a new 10 GB qcow2 disk image as storage, the source installation media being the host CD-ROM drive. It will use VNC graphics, and it will auto-launch the graphical client.
tux > virt-install --connect qemu:///system --virt-type kvm --name sled12 \
--memory 1024 --disk size=10 --cdrom /dev/cdrom --graphics vnc \
--os-variant sled12
tux > virt-install --connect xen:// --virt-type xen --name sled12 \
--memory 1024 --disk size=10 --cdrom /dev/cdrom --graphics vnc \
--os-variant sled12
This section provides instructions for operations exceeding the scope of a normal installation, such as including modules and extension packages.
Some operating systems such as openSUSE Leap offer to include add-on products in the installation process. In case the add-on product installation source is provided via network, no special VM Guest configuration is needed. If it is provided via CD/DVD or ISO image, it is necessary to provide the VM Guest installation system with both the standard installation medium and an image for the add-on product.
In case you are using the GUI-based installation, select in the last step of the wizard and add the add-on product ISO image via › . Specify the path to the image and set the to .
If installing from the command line, you need to set up the virtual CD/DVD
drives with the --disk parameter rather than with
--cdrom. The device that is specified first is used for
booting. The following example will install SUSE Linux Enterprise Server 12 plus SDK:
tux > virt-install --name sles12+sdk --memory 1024 --disk size=10 \
--disk /virt/iso/SLES12.iso,device=cdrom \
--disk /virt/iso/SLES12_SDK.iso,device=cdrom \
--graphics vnc --os-variant sles12
If started on a VM Host Server, the libvirt tools Virtual Machine Manager,
virsh, and virt-viewer can be used to
manage VM Guests on the host. However, it is also possible to manage
VM Guests on a remote VM Host Server. This requires configuring remote
access for libvirt on the host. For instructions, see
Chapter 10, Connecting and Authorizing.
To connect to such a remote host with Virtual Machine Manager, you need to set
up a connection as explained in
Section 10.2.2, “Managing Connections with Virtual Machine Manager”. If connecting to a
remote host using virsh or
virt-viewer, you need to specify a connection URI with
the parameter -c (for example, virsh -c
qemu+tls://saturn.example.com/system or virsh -c
xen+ssh://). The form of connection URI depends on the
connection type and the hypervisor—see
Section 10.2, “Connecting to a VM Host Server” for details.
Examples in this chapter are all listed without a connection URI.
The VM Guest listing shows all VM Guests managed by libvirt
on a VM Host Server.
The main window of the Virtual Machine Manager lists all VM Guests for each VM Host Server it is connected to. Each VM Guest entry contains the machine's name, its status (, , or ) displayed as an icon and literally, and a CPU usage bar.
virsh #
Use the command virsh list to get a
list of VM Guests:
tux > virsh list
tux > virsh list --all
For more information and further options, see virsh help
list or man 1 virsh.
VM Guests can be accessed via a VNC connection (graphical console) or, if supported by the guest operating system, via a serial console.
Opening a graphical console to a VM Guest lets you interact with the machine like a physical host via a VNC connection. If accessing the VNC server requires authentication, you are prompted to enter a user name (if applicable) and a password.
When you click into the VNC console, the cursor is “grabbed” and cannot be used outside the console anymore. To release it, press Alt–Ctrl.
To prevent the console from grabbing the cursor and to enable seamless cursor movement, add a tablet input device to the VM Guest. See Section 13.5, “Enabling Seamless and Synchronized Mouse Pointer Movement” for more information.
Certain key combinations such as Ctrl–Alt–Del are
interpreted by the host system and are not passed to the VM Guest. To
pass such key combinations to a VM Guest, open the menu from the VNC window and choose the desired key
combination entry. The menu is only available
when using Virtual Machine Manager and virt-viewer. With Virtual Machine Manager, you can
alternatively use the “sticky key” feature as explained in
Tip: Passing Key Combinations to Virtual Machines.
In principle, all VNC viewers can connect to the console of a
VM Guest. However, if you are using SASL authentication and/or TLS/SSL
connection to access the guest, the options are limited. Common VNC
viewers such as tightvnc or
tigervnc support neither SASL authentication nor
TLS/SSL. The only supported alternative to Virtual Machine Manager and
virt-viewer is Remmina (refer to Section 4.2, “Remmina: the Remote Desktop Client”).
In the Virtual Machine Manager, right-click a VM Guest entry.
Choose from the pop-up menu.
virt-viewer
#
virt-viewer is a simple VNC viewer with added
functionality for displaying VM Guest consoles. For example, it can be
started in “wait” mode, where it waits for a VM Guest to
start before it connects. It also supports automatically reconnecting to
a VM Guest that is rebooted.
virt-viewer addresses VM Guests by name, by ID or by
UUID. Use virsh list --all to get
this data.
To connect to a guest that is running or paused, use either the ID, UUID, or name. VM Guests that are shut off do not have an ID—you can only connect to them by UUID or name.
tux > virt-viewer 8
tux > virt-viewer --wait sles12
With the --wait option, the connection is kept open even if
the VM Guest is not running at the moment. When
the guest starts, the viewer will be launched.
For more information, see virt-viewer
--help or man 1 virt-viewer.
When using virt-viewer to open a connection to a
remote host via SSH, the SSH password needs to be entered twice. The
first time for authenticating with libvirt, the second time for
authenticating with the VNC server. The second password needs to be
provided on the command line where virt-viewer was started.
Accessing the graphical console of a virtual machine requires a graphical
environment on the client accessing the VM Guest.
As an alternative, virtual machines
managed with libvirt can also be accessed from the shell via the serial
console and virsh. To open a serial console to a
VM Guest named “sles12”, run the following command:
tux > virsh console sles12
virsh console takes two optional flags:
--safe ensures exclusive access to the console,
--force disconnects any existing sessions before
connecting. Both features need to be supported by the guest operating
system.
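For example, to get exclusive access to the console of the guest “sles12”, closing any competing session first, the two flags can be combined. A sketch (the guest name is an example; run the printed command on the VM Host Server):

```shell
# Compose the virsh call combining both console flags described above.
guest=sles12   # example guest name
echo "virsh console --safe --force $guest"
```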
Being able to connect to a VM Guest via serial console requires that the guest operating system supports serial console access and is properly configured. Refer to the guest operating system manual for more information.
Serial console access in SUSE Linux Enterprise and openSUSE is disabled by default. To enable it, proceed as follows:
Launch the YaST Boot Loader module and switch to the tab. Add console=ttyS0 to the
field .
Launch the YaST Boot Loader module and select the boot entry for
which to activate serial console access. Choose
and add console=ttyS0 to
the field .
Additionally, edit
/etc/inittab and uncomment the line with the
following content:
#S0:12345:respawn:/sbin/agetty -L 9600 ttyS0 vt102
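On guests using systemd (such as current openSUSE and SUSE Linux Enterprise releases) there is no /etc/inittab; the serial getty is enabled per port instead, via systemd's serial-getty@.service template. A sketch, assuming the console was put on ttyS0 as above (run the printed command as root inside the guest):

```shell
# Compose the systemctl call that enables and starts the getty on the
# first serial port; ttyS0 matches the console=ttyS0 kernel parameter.
unit="serial-getty@ttyS0.service"
echo "systemctl enable --now $unit"
```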
Starting, stopping or pausing a VM Guest can be done with either
Virtual Machine Manager or virsh. You can also configure a
VM Guest to be automatically started when booting the VM Host Server.
When shutting down a VM Guest, you may either shut it down gracefully, or force the shutdown. The latter is equivalent to pulling the power plug on a physical host and is only recommended if there are no alternatives. Forcing a shutdown may cause file system corruption and loss of data on the VM Guest.
To be able to perform a graceful shutdown, the VM Guest must be configured to support ACPI. If you have created the guest with the Virtual Machine Manager, ACPI should be available in the VM Guest.
Depending on the guest operating system, availability of ACPI may not be sufficient to perform a graceful shutdown. It is strongly recommended to test shutting down and rebooting a guest before using it in production. openSUSE or SUSE Linux Enterprise Desktop, for example, can require PolKit authorization for shutdown and reboot. Make sure this policy is turned off on all VM Guests.
If ACPI was enabled during a Windows XP/Windows Server 2003 guest installation, turning it on in the VM Guest configuration only is not sufficient. For more information, see:
Regardless of the VM Guest's configuration, a graceful shutdown is always possible from within the guest operating system.
Changing a VM Guest's state can be done either from Virtual Machine Manager's main window, or from a VNC window.
Right-click a VM Guest entry.
Choose , , or one of the from the pop-up menu.
Open a VNC Window as described in Section 9.2.1.1, “Opening a Graphical Console with Virtual Machine Manager”.
Choose , , or one of the options either from the toolbar or from the menu.
You can automatically start a guest when the VM Host Server boots. This feature is not enabled by default and needs to be enabled for each VM Guest individually. There is no way to activate it globally.
Double-click the VM Guest entry in Virtual Machine Manager to open its console.
Choose › to open the VM Guest configuration window.
Choose and check .
Save the new configuration with .
virsh #
In the following examples, the state of a VM Guest named “sles12” is changed.
tux > virsh start sles12
tux > virsh suspend sles12
tux > virsh resume sles12
tux > virsh reboot sles12
tux > virsh shutdown sles12
tux > virsh destroy sles12
tux > virsh autostart sles12
tux > virsh autostart --disable sles12
Saving a VM Guest preserves the exact state of the guest’s memory. The operation is similar to hibernating a computer. A saved VM Guest can be quickly restored to its previously saved running condition.
When saved, the VM Guest is paused, its current memory state is saved to disk, and then the guest is stopped. The operation does not make a copy of any portion of the VM Guest’s virtual disk. The amount of time taken to save the virtual machine depends on the amount of memory allocated. When saved, a VM Guest’s memory is returned to the pool of memory available on the VM Host Server.
The restore operation loads a VM Guest’s previously saved memory state file and starts it. The guest is not booted but instead resumed at the point where it was previously saved. The operation is similar to coming out of hibernation.
The VM Guest is saved to a state file. Make sure there is enough space on the partition you are going to save to. For an estimation of the file size in megabytes to be expected, issue the following command on the guest:
tux > free -mh | awk '/^Mem:/ {print $3}'
After using the save operation, do not boot or start the saved VM Guest. Doing so would cause the machine's virtual disk and the saved memory state to get out of synchronization. This can result in critical errors when restoring the guest.
To be able to work with a saved VM Guest again, use the restore operation.
If you used virsh to save a VM Guest, you cannot
restore it using Virtual Machine Manager. In this case, make sure to restore using
virsh.
raw, qcow2, qed
Saving and restoring VM Guests is only possible if the
VM Guest is using a virtual disk of the type
raw (.img),
qcow2, or qed.
Open a VNC connection window to a VM Guest. Make sure the guest is running.
Choose › › .
Open a VNC connection window to a VM Guest. Make sure the guest is not running.
Choose › .
If the VM Guest was previously saved using Virtual Machine Manager, you will not be
offered an option to the guest. However, note the
caveats on machines saved with virsh outlined in
Warning: Always Restore Saved Guests.
virsh #
Save a running VM Guest with the command virsh
save and specify the file to which it is saved.
tux > virsh save opensuse13 /virtual/saves/opensuse13.vmsave
tux > virsh save 37 /virtual/saves/opensuse13.vmsave
To restore a VM Guest, use virsh restore:
tux > virsh restore /virtual/saves/opensuse13.vmsave
VM Guest snapshots are snapshots of the complete virtual machine including the state of CPU, RAM, and the content of all writable disks. To use virtual machine snapshots, you must have at least one non-removable and writable block device using the qcow2 disk image format.
Snapshots are supported on KVM VM Host Servers only.
Snapshots let you restore the state of the machine at a particular point in time. This is for example useful to undo a faulty configuration or the installation of a lot of packages. It is also helpful for testing purposes, as it allows you to go back to a defined state at any time.
Snapshots can be taken either from running guests or from a guest currently not running. Taking a snapshot from a guest that is shut down ensures data integrity. To create a snapshot from a running system, be aware that the snapshot only captures the state of the disk(s), not the state of the memory. Therefore you need to ensure that:
All running programs have written their data to the disk. If you are unsure, terminate the application and/or stop the respective service.
Buffers have been written to disk. This can be achieved by running the
command sync on the VM Guest.
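The preconditions above can be sketched as a small pre-snapshot routine. The domain name and the use of the QEMU guest agent are illustrative assumptions, not part of this section:

```shell
# Flush dirty buffers to disk before taking a live snapshot.
sync
status=$?

# If the QEMU guest agent runs inside the guest, its file systems could
# additionally be frozen around the snapshot (hypothetical domain name):
#   virsh domfsfreeze opensuse13
#   virsh snapshot-create-as --domain opensuse13 --name "pre-change" --live
#   virsh domfsthaw opensuse13

echo "sync exit status: $status"
```

Freezing the file systems is optional extra protection; flushing buffers with sync is the minimum described above.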
Starting a snapshot reverts the machine back to the state it was in when the snapshot was taken. Any changes written to the disk after that point in time will be lost when starting the snapshot.
Starting a snapshot will restore the machine to the state (shut off or running) it was in when the snapshot was taken. After starting a snapshot that was created while the VM Guest was shut off, you will need to boot it.
To open the snapshot management view in Virtual Machine Manager, open the VNC window as described in Section 9.2.1.1, “Opening a Graphical Console with Virtual Machine Manager”. Now either choose › or click in the toolbar.
The list of existing snapshots for the chosen VM Guest is displayed in the left-hand part of the window. The snapshot that was last started is marked with a green tick. The right-hand part of the window shows details of the snapshot currently marked in the list. These details include the snapshot's title and time stamp, the state of the VM Guest at the time the snapshot was taken and a description. Snapshots of running guests also include a screenshot. The can be changed directly from this view. Other snapshot data cannot be changed.
To take a new snapshot of a VM Guest, proceed as follows:
Shut down the VM Guest if you want to create a snapshot from a guest that is not running.
Click in the bottom left corner of the VNC window.
The window opens.
Provide a and, optionally, a description. The name cannot be changed after the snapshot has been taken. To be able to identify the snapshot later easily, use a “speaking name”.
Confirm with .
To delete a snapshot of a VM Guest, proceed as follows:
Click in the bottom left corner of the VNC window.
Confirm the deletion with .
To start a snapshot, proceed as follows:
Click in the bottom left corner of the VNC window.
Confirm the start with .
virsh #
To list all existing snapshots for a domain
(admin_server in the following), run the
snapshot-list command:
tux > virsh snapshot-list --domain admin_server
Name Creation Time State
------------------------------------------------------------
Basic installation incl. SMT finished 2013-09-18 09:45:29 +0200 shutoff
Basic installation incl. SMT for CLOUD3 2013-12-11 15:11:05 +0100 shutoff
Basic installation incl. SMT for CLOUD3-HA 2014-03-24 13:44:03 +0100 shutoff
Basic installation incl. SMT for CLOUD4 2014-07-07 11:27:47 +0200 shutoff
Beta1 Running 2013-07-12 12:27:28 +0200 shutoff
Beta2 prepared 2013-07-12 17:00:44 +0200 shutoff
Beta2 running 2013-07-29 12:14:11 +0200 shutoff
Beta3 admin node deployed 2013-07-30 16:50:40 +0200 shutoff
Beta3 prepared 2013-07-30 17:07:35 +0200 shutoff
Beta3 running 2013-09-02 16:13:25 +0200 shutoff
Cloud2 GM running 2013-12-10 15:44:58 +0100 shutoff
CLOUD3 RC prepared 2013-12-20 15:30:19 +0100 shutoff
CLOUD3-HA Build 680 prepared 2014-03-24 14:20:37 +0100 shutoff
CLOUD3-HA Build 796 installed (zypper up) 2014-04-14 16:45:18 +0200 shutoff
GMC2 post Cloud install 2013-09-18 10:53:03 +0200 shutoff
GMC2 pre Cloud install 2013-09-18 10:31:17 +0200 shutoff
GMC2 prepared (incl. Add-On Installation) 2013-09-17 16:22:37 +0200 shutoff
GMC_pre prepared 2013-09-03 13:30:38 +0200 shutoff
OS + SMT + eth[01] 2013-06-14 16:17:24 +0200 shutoff
OS + SMT + Mirror + eth[01] 2013-07-30 15:50:16 +0200 shutoff
The snapshot that was last started is shown with the
snapshot-current command:
tux > virsh snapshot-current --name admin_server
Basic installation incl. SMT for CLOUD4
Details about a particular snapshot can be obtained by running the
snapshot-info command:
tux > virsh snapshot-info --domain admin_server --snapshotname "Basic installation incl. SMT for CLOUD4"
Name: Basic installation incl. SMT for CLOUD4
Domain: admin_server
Current: yes
State: shutoff
Location: internal
Parent: Basic installation incl. SMT for CLOUD3-HA
Children: 0
Descendants: 0
Metadata: yes
To take a new snapshot of a VM Guest currently not running, use the
snapshot-create-as command as follows:
tux > virsh snapshot-create-as --domain admin_server --name "Snapshot 1" \
--description "First snapshot"
--domain: Domain name. Mandatory.
--name: Name of the snapshot. It is recommended to use a “speaking name”, since that makes it easier to identify the snapshot. Mandatory.
--description: Description for the snapshot. Optional.
To take a snapshot of a running VM Guest, you need to specify the
--live parameter:
tux > virsh snapshot-create-as --domain admin_server --name "Snapshot 2" \
--description "First live snapshot" --live
Refer to the SNAPSHOT COMMANDS section in
man 1 virsh for more details.
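The distinction between snapshots of running and shut-off guests can be captured in a small helper. The state string is passed in explicitly here so the logic can be shown without a libvirt daemon; in practice it would come from `virsh domstate DOMAIN`:

```shell
# Sketch: add --live to snapshot-create-as only when the domain is running.
snapshot_flags() {
  case "$1" in
    running) echo "--live" ;;
    *)       echo "" ;;
  esac
}

snapshot_flags running    # prints --live
snapshot_flags "shut off" # prints an empty line
```

A wrapper script could then build the full command, for example `virsh snapshot-create-as --domain DOMAIN --name NAME $(snapshot_flags "$state")`.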
To delete a snapshot of a VM Guest and restore the disk space it occupies,
use the snapshot-delete command:
tux > virsh snapshot-delete --domain admin_server --snapshotname "Snapshot 2"
To start a snapshot, use the snapshot-revert
command:
tux > virsh snapshot-revert --domain admin_server --snapshotname "Snapshot 1"
To start the current snapshot (the one the VM Guest was last started
from), it is sufficient to use --current rather than
specifying the snapshot name:
tux > virsh snapshot-revert --domain admin_server --current
By default, deleting a VM Guest using virsh removes only
its XML configuration. Since attached storage is not deleted by default, you
can reuse it with another VM Guest. With Virtual Machine Manager, you can also delete a
guest's storage files—this will completely erase the guest.
In the Virtual Machine Manager, right-click a VM Guest entry.
From the context menu, choose .
A confirmation window opens. Clicking will permanently erase the VM Guest. The deletion is not recoverable.
You can also permanently delete the guest's virtual disk by activating . The deletion is not recoverable either.
virsh #
To delete a VM Guest, it needs to be shut down first. It is not possible to delete a running guest. For information on shutting down, see Section 9.3, “Changing a VM Guest's State: Start, Stop, Pause”.
To delete a VM Guest with virsh, run
virsh undefine
VM_NAME.
tux > virsh undefine sles12
There is no option to automatically delete the attached storage files. If they are managed by libvirt, delete them as described in Section 11.2.4, “Deleting Volumes from a Storage Pool”.
One of the major advantages of virtualization is that VM Guests are portable. When a VM Host Server needs to go down for maintenance, or when the host gets overloaded, the guests can easily be moved to another VM Host Server. KVM and Xen even support “live” migrations during which the VM Guest is constantly available.
To successfully migrate a VM Guest to another VM Host Server, the following requirements need to be met:
It is recommended that the source and destination systems have the same architecture. However, it is possible to migrate between hosts with AMD* and Intel* architectures.
Storage devices must be accessible from both machines (for example, via NFS or iSCSI) and must be configured as a storage pool on both machines. For more information, see Chapter 11, Managing Storage.
This is also true for CD-ROM or floppy images that are connected during the move. However, you can disconnect them prior to the move as described in Section 13.8, “Ejecting and Changing Floppy or CD/DVD-ROM Media with Virtual Machine Manager”.
libvirtd needs to run on both VM Host Servers and you must be able
to open a remote libvirt connection between the target and the
source host (or vice versa). Refer to
Section 10.3, “Configuring Remote Connections” for details.
If a firewall is running on the target host, ports need to be opened
to allow the migration. If you do not specify a port during the
migration process, libvirt chooses one from the range
49152 to 49215. Make sure that either this range (recommended) or a
dedicated port of your choice is opened in the firewall on the
target host.
Host and target machine should be in the same subnet on the network, otherwise networking will not work after the migration.
No running or paused VM Guest with the same name must exist on the target host. If a shut down machine with the same name exists, its configuration will be overwritten.
All CPU models except the host cpu model are supported when migrating VM Guests.
SATA disk device type is not migratable.
File system pass-through feature is incompatible with migration.
The VM Host Server and VM Guest need to have proper timekeeping installed. See Chapter 15, VM Guest Clock Settings.
No physical devices can be passed from host to guest. Live migration is currently not supported when using devices with PCI pass-through or SR-IOV. If live migration needs to be supported, you need to use software virtualization (paravirtualization or full virtualization).
Cache mode setting is an important setting for migration. See: Section 14.5, “Effect of Cache Modes on Live Migration”.
The image directory should be located in the same path on both hosts.
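The firewall requirement above can be met by opening libvirt's default migration port range on the target host. The following dry-run sketch only prints the commands; firewalld is an assumption here, so adapt them to whatever firewall is actually in use:

```shell
# Dry-run sketch: print the commands that would open libvirt's default
# migration port range (49152 to 49215) on the target host.
PORT_RANGE="49152-49215"
echo "firewall-cmd --add-port=${PORT_RANGE}/tcp"
echo "firewall-cmd --permanent --add-port=${PORT_RANGE}/tcp"
```

The first command opens the range in the running configuration, the second makes the change persistent across reboots.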
When using the Virtual Machine Manager to migrate VM Guests, it does not matter on which machine it is started. You can start Virtual Machine Manager on the source or the target host or even on a third host. In the latter case you need to be able to open remote connections to both the target and the source host.
Start Virtual Machine Manager and establish a connection to the target or the source host. If the Virtual Machine Manager was started neither on the target nor the source host, connections to both hosts need to be opened.
Right-click the VM Guest that you want to migrate and choose . Make sure the guest is running or paused—it is not possible to migrate guests that are shut down.
To increase the speed of the migration somewhat, pause the VM Guest. This is the equivalent of the former so-called “offline migration” option of Virtual Machine Manager.
Choose a for the VM Guest. If the desired target host does not show up, make sure that you are connected to the host.
To change the default options for connecting to the remote host, under , set the , and the target host's (IP address or host name) and . If you specify a , you must also specify an .
Under , choose whether the move should be permanent (default) or temporary, using .
Additionally, there is the option , which
allows migrating without disabling the cache of the VM Host Server. This can
speed up the migration but only works when the current configuration
allows for a consistent view of the VM Guest storage without using
cache="none"/O_DIRECT.
In recent versions of Virtual Machine Manager, the option of setting a bandwidth for the
migration has been removed. To set a specific bandwidth, use
virsh instead.
To perform the migration, click .
When the migration is complete, the window closes and the VM Guest is now listed on the new host in the Virtual Machine Manager window. The original VM Guest will still be available on the source host (in shut down state).
virsh #
To migrate a VM Guest with virsh
migrate, you need to have direct or remote shell access
to the VM Host Server, because the command needs to be run on the host. The
migration command looks like this:
tux > virsh migrate [OPTIONS] VM_ID_or_NAME CONNECTION_URI [--migrateuri tcp://REMOTE_HOST:PORT]
The most important options are listed below. See virsh help
migrate for a full list.
--live
Does a live migration. If not specified, the guest will be paused during the migration (“offline migration”).
--suspend
Does an offline migration and does not restart the VM Guest on the target host.
--persistent
By default a migrated VM Guest will be migrated temporarily, so its configuration is automatically deleted on the target host if it is shut down. Use this switch to make the migration persistent.
--undefinesource
When specified, the VM Guest definition on the source host will be deleted after a successful migration (however, virtual disks attached to this guest will not be deleted).
The following examples use mercury.example.com as the source system and
jupiter.example.com as the target system; the VM Guest's name is
opensuse131 with Id 37.
tux > virsh migrate 37 qemu+ssh://tux@jupiter.example.com/system
tux > virsh migrate --live opensuse131 qemu+ssh://tux@jupiter.example.com/system
tux > virsh migrate --live --persistent --undefinesource 37 \
qemu+tls://tux@jupiter.example.com/system
tux > virsh migrate opensuse131 qemu+ssh://tux@jupiter.example.com/system \
--migrateuri tcp://jupiter.example.com:49152
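The two URIs in these examples follow a fixed pattern, which can be made explicit by composing them from their parts. All values are the example names used in this section:

```shell
# Sketch: how the connection URI and migration URI are composed.
USER="tux"
TARGET="jupiter.example.com"
GUEST="opensuse131"
PORT="49152"

# Connection URI: transport+driver, user and host, libvirt instance path.
CONNECTION_URI="qemu+ssh://${USER}@${TARGET}/system"
# Migration URI: transport, target host, and the port the data flows over.
MIGRATE_URI="tcp://${TARGET}:${PORT}"

echo "virsh migrate --live ${GUEST} ${CONNECTION_URI} --migrateuri ${MIGRATE_URI}"
```

Note that the connection URI authenticates the libvirt control channel, while the migration URI only describes the data channel between the hosts.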
By default virsh migrate creates a temporary
(transient) copy of the VM Guest on the target host. A shut down
version of the original guest description remains on the source host. A
transient copy will be deleted from the server after it is shut down.
To create a permanent copy of a guest on the target host, use
the switch --persistent. A shut down version of the
original guest description remains on the source host, too. Use the
option --undefinesource together with
--persistent for a “real” move where a
permanent copy is created on the target host and the version on the
source host is deleted.
It is not recommended to use --undefinesource without
the --persistent option, since this will result in the
loss of both VM Guest definitions when the guest is shut down on
the target host.
First you need to export the storage to share the guest image between
hosts. This can be done with an NFS server. In the following example we
want to share the /volume1/VM directory for all
machines that are on the network 10.0.1.0/24. We will use a SUSE Linux Enterprise
NFS server. As the root user, edit the /etc/exports
file and add:
/volume1/VM 10.0.1.0/24(rw,sync,no_root_squash)
You need to restart the NFS server:
tux > sudo systemctl restart nfsserver
tux > sudo exportfs
/volume1/VM      10.0.1.0/24
On each host where you want to migrate the VM Guest, the pool must
be defined to be able to access the volume (that contains the Guest
image). Our NFS server IP address is 10.0.1.99, its share is the
/volume1/VM directory, and we want to get it
mounted in the /var/lib/libvirt/images/VM
directory. The pool name will be VM. To define
this pool, create a VM.xml file with the following
content:
<pool type='netfs'>
<name>VM</name>
<source>
<host name='10.0.1.99'/>
<dir path='/volume1/VM'/>
<format type='auto'/>
</source>
<target>
<path>/var/lib/libvirt/images/VM</path>
<permissions>
<mode>0755</mode>
<owner>-1</owner>
<group>-1</group>
</permissions>
</target>
</pool>
Then load it into libvirt using the pool-define
command:
root # virsh pool-define VM.xml
An alternative way to define this pool is to use the
virsh command:
root # virsh pool-define-as VM --type netfs --source-host 10.0.1.99 \
--source-path /volume1/VM --target /var/lib/libvirt/images/VM
Pool VM created
The following commands assume that you are in the interactive shell of
virsh which can also be reached by using the command
virsh without any arguments.
Then the pool can be set to start automatically at host boot (autostart
option):
virsh # pool-autostart VM
Pool VM marked as autostarted
If you want to disable the autostart:
virsh # pool-autostart VM --disable
Pool VM unmarked as autostarted
Check if the pool is present:
virsh # pool-list --all
 Name                 State      Autostart
-------------------------------------------
 default              active     yes
 VM                   active     yes

virsh # pool-info VM
Name:           VM
UUID:           42efe1b3-7eaa-4e24-a06a-ba7c9ee29741
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       2,68 TiB
Allocation:     2,38 TiB
Available:      306,05 GiB
Remember: this pool must be defined on each host where you want to be able to migrate your VM Guest.
The pool has been defined—now we need a volume which will contain the disk image:
virsh # vol-create-as VM sled12.qcow2 8G --format qcow2
Vol sled12.qcow2 created
The volume name shown will be used later to install the guest with virt-install.
Let's create an openSUSE Leap VM Guest with the
virt-install command. The VM
pool will be specified with the --disk option;
cache=none is recommended if you do not want to use
the --unsafe option while doing the migration.
root # virt-install --connect qemu:///system --virt-type kvm --name \
sled12 --memory 1024 --disk vol=VM/sled12.qcow2,cache=none --cdrom \
/mnt/install/ISO/SLE-12-Desktop-DVD-x86_64-Build0327-Media1.iso --graphics \
vnc --os-variant sled12
Starting install...
Creating domain...
Everything is ready to do the migration now. Run the
migrate command on the VM Host Server that is currently
hosting the VM Guest, and choose the destination.
virsh # migrate --live sled12 --verbose qemu+ssh://IP/Hostname/system
Password:
Migration: [ 12 %]
After starting Virtual Machine Manager and connecting to the VM Host Server, a CPU usage graph of all the running guests is displayed.
It is also possible to get information about disk and network usage with this tool. However, you must first activate this in :
Run virt-manager.
Select › .
Change the tab from to .
Activate the check boxes for the kind of activity you want to see: , , and .
If desired, also change the update interval using .
Close the dialog.
Activate the graphs that should be displayed under › .
Afterward, the disk and network statistics are also displayed in the main window of the Virtual Machine Manager.
More precise data is available from the VNC window. Open a VNC window as described in Section 9.2.1, “Opening a Graphical Console”. Choose from the toolbar or the menu. The statistics are displayed from the entry of the left-hand tree menu.
virt-top #
virt-top is a command line tool similar to the
well-known process monitoring tool
top. virt-top uses libvirt and
therefore is capable of showing statistics for VM Guests running on
different hypervisors. It is recommended to use
virt-top instead of hypervisor-specific tools like
xentop.
By default virt-top shows statistics for all running
VM Guests. Among the data that is displayed is the percentage of memory
used (%MEM) and CPU (%CPU) and the
uptime of the guest (TIME). The data is updated
regularly (every three seconds by default). The following shows the output
on a VM Host Server with seven VM Guests, four of them inactive:
virt-top 13:40:19 - x86_64 8/8CPU 1283MHz 16067MB 7.6% 0.5%
7 domains, 3 active, 3 running, 0 sleeping, 0 paused, 4 inactive D:0 O:0 X:0
CPU: 6.1% Mem: 3072 MB (3072 MB by guests)
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
7 R 123 1 18K 196 5.8 6.0 0:24.35 sled12_sp1
6 R 1 0 18K 0 0.2 6.0 0:42.51 sles12_sp1
5 R 0 0 18K 0 0.1 6.0 85:45.67 opensuse_leap
- (Ubuntu_1410)
- (debian_780)
- (fedora_21)
- (sles11sp3)
By default the output is sorted by ID. Use the following key combinations to change the sort field:
Shift–P: CPU usage
Shift–M: Total memory allocated by the guest
Shift–T: Time
Shift–I: ID
To use any other field for sorting, press Shift–F and select a field from the list. To toggle the sort order, use Shift–R.
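Because virt-top's table is plain text, its columns can also be post-processed with standard tools. The following sketch sums the %CPU column of the sample output shown above (active guests only):

```shell
# Sum the %CPU column (field 7) of virt-top-style output, skipping the header.
awk 'NR > 1 { total += $7 } END { printf "total guest CPU: %.1f%%\n", total }' <<'EOF'
ID S RDRQ WRRQ RXBY TXBY %CPU %MEM TIME NAME
7 R 123 1 18K 196 5.8 6.0 0:24.35 sled12_sp1
6 R 1 0 18K 0 0.2 6.0 0:42.51 sles12_sp1
5 R 0 0 18K 0 0.1 6.0 85:45.67 opensuse_leap
EOF
```

For the sample data this prints `total guest CPU: 6.1%`, matching the aggregate CPU figure in virt-top's summary line above. In a live setup, the here-document would be replaced by `virt-top -b -n 1` batch output.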
virt-top also supports different views on the
VM Guests data, which can be changed on-the-fly by pressing the following
keys:
0: default view
1: show physical CPUs
2: show network interfaces
3: show virtual disks
virt-top supports more hot keys to change the view of
the data and many command line switches that affect the behavior of
the program. For more information, see man 1 virt-top.
kvm_stat #
kvm_stat can be used to trace KVM performance
events. It monitors /sys/kernel/debug/kvm, so it
needs the debugfs to be mounted. On openSUSE Leap it should be
mounted by default. In case it is not mounted, use the following
command:
tux > sudo mount -t debugfs none /sys/kernel/debug
kvm_stat can be used in three different modes:
kvm_stat # update in 1 second intervals
kvm_stat -1 # 1 second snapshot
kvm_stat -l > kvmstats.log # update in 1 second intervals in log format
                            # can be imported to a spreadsheet
kvm_stat #
kvm statistics

 efer_reload               0       0
 exits              11378946  218130
 fpu_reload            62144     152
 halt_exits           414866     100
 halt_wakeup          260358      50
 host_state_reload    539650     249
 hypercalls                0       0
 insn_emulation      6227331  173067
 insn_emulation_fail       0       0
 invlpg               227281      47
 io_exits             113148      18
 irq_exits            168474     127
 irq_injections       482804     123
 irq_window            51270      18
 largepages                0       0
 mmio_exits             6925       0
 mmu_cache_miss        71820      19
 mmu_flooded           35420       9
 mmu_pde_zapped        64763      20
 mmu_pte_updated           0       0
 mmu_pte_write        213782      29
 mmu_recycled              0       0
 mmu_shadow_zapped    128690      17
 mmu_unsync               46      -1
 nmi_injections            0       0
 nmi_window                0       0
 pf_fixed            1553821     857
 pf_guest            1018832     562
 remote_tlb_flush     174007      37
 request_irq               0       0
 signal_exits              0       0
 tlb_flush            394182     148
See http://clalance.blogspot.com/2009/01/kvm-performance-tools.html for further information on how to interpret these values.
Managing several VM Host Servers, each hosting multiple VM Guests, quickly
becomes difficult. One benefit of libvirt is the ability to connect to
several VM Host Servers at once, providing a single interface to manage all
VM Guests and to connect to their graphical console.
To ensure only authorized users can connect, libvirt offers
several connection types (via TLS, SSH, Unix sockets, and TCP) that can be
combined with different authorization mechanisms (socket, PolKit, SASL
and Kerberos).
The power to manage VM Guests and to access their graphical console is something that should be restricted to a well defined circle of persons. To achieve this goal, you can use the following authentication techniques on the VM Host Server:
Access control for Unix sockets with permissions and group ownership.
This method is available for libvirtd connections only.
Access control for Unix sockets with PolKit. This method is available
for local libvirtd connections only.
User name and password authentication with SASL (Simple Authentication
and Security Layer). This method is available for both libvirtd
and VNC connections. Using SASL does not require real user accounts on
the server, since it uses its own database to store user names and
passwords. Connections authenticated with SASL are encrypted.
Kerberos authentication. This method, available for libvirtd
connections only, is not covered in this manual. Refer to
http://libvirt.org/auth.html#ACL_server_kerberos
for details.
Single password authentication. This method is available for VNC connections only.
libvirtd and VNC need to be configured separately
Access to the VM Guest's management functions (via libvirtd) on
the one hand, and to its graphical console on the other hand, always
needs to be configured separately. When restricting access to the
management tools, these restrictions do not
automatically apply to VNC connections!
When accessing VM Guests from remote via TLS/SSL connections, access can be indirectly controlled on each client by restricting read permissions to the certificate's key file to a certain group. See Section 10.3.2.5, “Restricting Access (Security Considerations)” for details.
libvirtd Authentication #
libvirtd authentication is configured in
/etc/libvirt/libvirtd.conf. The configuration made
here applies to all libvirt tools such as the Virtual Machine Manager or
virsh.
libvirt offers two sockets: a read-only socket for monitoring
purposes and a read-write socket to be used for management operations.
Access to both sockets can be configured independently. By default, both
sockets are owned by root.root. Default access
permissions on the read-write socket are restricted to the user
root (0700) and fully open on the read-only
socket (0777).
In the following instructions, you will learn how to configure access permissions for the read-write socket. The same instructions also apply to the read-only socket. All configuration steps need to be carried out on the VM Host Server.
The default authentication method on openSUSE Leap is access control
for Unix sockets. Only the user root may authenticate. When
accessing the libvirt tools as a non-root user directly on the
VM Host Server, you need to provide the root password through
PolKit once. You are then granted access for the current and for future
sessions.
Alternatively, you can configure libvirt to allow
“system” access to non-privileged users. See
Section 10.2.1, ““system” Access for Non-Privileged Users” for details.
Section 10.1.1.2, “Local Access Control for Unix Sockets with PolKit”
Section 10.1.1.1, “Access Control for Unix Sockets with Permissions and Group Ownership”
Section 10.1.1.1, “Access Control for Unix Sockets with Permissions and Group Ownership”
Section 10.1.1.3, “User name and Password Authentication with SASL”
none (access controlled on the client side by restricting access to the certificates)
To grant access for non-root accounts, configure the
sockets to be owned and accessible by a certain group
(libvirt in the following
example). This authentication method can be used for local and remote
SSH connections.
In case it does not exist, create the group that should own the socket:
tux > sudo groupadd libvirt
The group must exist prior to restarting libvirtd. If not, the
restart will fail.
Add the desired users to the group:
tux > sudo usermod --append --groups libvirt tux
Change the configuration in
/etc/libvirt/libvirtd.conf as follows:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
auth_unix_rw = "none"
Restart libvirtd:
tux > sudo systemctl restart libvirtd
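After the restart, the configuration can be verified from the user's side. A minimal sketch, using the libvirt group name from the example above:

```shell
# Check whether the current user is a member of the group that owns
# the read-write socket (group "libvirt" from the example above).
if id -nG | grep -qw libvirt; then
  echo "current user is in the libvirt group"
else
  echo "current user is NOT in the libvirt group"
fi
```

Remember that group membership only takes effect in new login sessions; a user added with usermod must log out and back in first.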
Access control for Unix sockets with PolKit is the default
authentication method on openSUSE Leap for non-remote connections.
Therefore, no libvirt configuration changes are needed. With
PolKit authorization enabled, permissions on both sockets default to
0777 and each application trying to access a socket
needs to authenticate via PolKit.
Authentication with PolKit can only be used for local connections on the VM Host Server itself, since PolKit does not handle remote authentication.
Two policies for accessing libvirt's sockets exist:
org.libvirt.unix.monitor: accessing the read-only socket
org.libvirt.unix.manage: accessing the read-write socket
By default, the policy for accessing the read-write socket is to
authenticate with the root password once and grant the
privilege for the current and for future sessions.
To grant users access to a socket without having to provide
the root password, you need to create a rule in
/etc/polkit-1/rules.d. Create the file
/etc/polkit-1/rules.d/10-grant-libvirt with the
following content to grant access to the read-write socket to all
members of the group
libvirt:
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" && subject.isInGroup("libvirt")) {
return polkit.Result.YES;
}
});
SASL provides user name and password authentication and data encryption (digest-md5, by default). Since SASL maintains its own user database, the users do not need to exist on the VM Host Server. SASL is required by TCP connections and on top of TLS/SSL connections.
Using digest-md5 encryption on an otherwise not encrypted TCP connection does not provide enough security for production environments. It is recommended to only use it in testing environments.
Access from remote TLS/SSL connections can be indirectly controlled on the client side by restricting access to the certificate's key file. However, this might prove error-prone when dealing with many clients. Using SASL with TLS adds security by additionally controlling access on the server side.
To configure SASL authentication, proceed as follows:
Change the configuration in
/etc/libvirt/libvirtd.conf as follows:
To enable SASL for TCP connections:
auth_tcp = "sasl"
To enable SASL for TLS/SSL connections:
auth_tls = "sasl"
Restart libvirtd:
tux > sudo systemctl restart libvirtd
The libvirt SASL configuration file is located at
/etc/sasl2/libvirtd.conf. Normally, there is no
need to change the defaults. However, if using SASL on top of TLS,
you may turn off session encryption to avoid additional overhead (TLS
connections are already encrypted) by commenting the line setting the
mech_list parameter. Only do this for TLS/SASL, for
TCP connections this parameter must be set to digest-md5.
#mech_list: digest-md5
By default, no SASL users are configured, so no logins are possible. Use the following commands to manage users:
tux > saslpasswd2 -a libvirt tux                    # add the user tux
tux > saslpasswd2 -a libvirt -d tux                 # delete the user tux
tux > sasldblistusers2 -f /etc/libvirt/passwd.db    # list existing users
virsh and SASL Authentication
When using SASL authentication, you will be prompted for a user name
and password every time you issue a virsh command.
Avoid this by using virsh in shell mode.
Since access to the graphical console of a VM Guest is not
controlled by libvirt, but rather by the specific hypervisor, it is
always necessary to additionally configure VNC authentication. The main
configuration file is
/etc/libvirt/<hypervisor>.conf. This
section describes the QEMU/KVM hypervisor, so the target
configuration file is /etc/libvirt/qemu.conf.
In contrast to KVM and LXC, Xen does not yet offer more
sophisticated VNC authentication than setting a password on a per VM
basis. See the <graphics type='vnc'...
libvirt configuration option below.
Two authentication types are available: SASL and single password
authentication. If you are using SASL for libvirt authentication,
it is strongly recommended to use it for VNC authentication as
well—it is possible to share the same database.
A third method to restrict access to the VM Guest is to enable the use of TLS encryption on the VNC server. This requires the VNC clients to have access to x509 client certificates. By restricting access to these certificates, access can indirectly be controlled on the client side. Refer to Section 10.3.2.4.2, “VNC over TLS/SSL: Client Configuration” for details.
SASL provides user name and password authentication and data
encryption. Since SASL maintains its own user database, the users do
not need to exist on the VM Host Server. As with SASL authentication for
libvirt, you may use SASL on top of TLS/SSL connections. Refer to
Section 10.3.2.4.2, “VNC over TLS/SSL: Client Configuration” for details
on configuring these connections.
To configure SASL authentication for VNC, proceed as follows:
Create a SASL configuration file. It is recommended to use the
existing libvirt file. If you have already configured SASL for
libvirt and are planning to use the same settings including the
same user name and password database, a simple link is suitable:
tux > sudo ln -s /etc/sasl2/libvirt.conf /etc/sasl2/qemu.conf
If you are setting up SASL for VNC only or you are planning to use a
different configuration than for libvirt, copy the existing file
to use as a template:
tux > sudo cp /etc/sasl2/libvirt.conf /etc/sasl2/qemu.conf
Then edit it according to your needs.
Change the configuration in
/etc/libvirt/qemu.conf as follows:
vnc_listen = "0.0.0.0"
vnc_sasl = 1
sasldb_path: /etc/libvirt/qemu_passwd.db
The first parameter enables VNC to listen on all public interfaces (rather than to the local host only), and the second parameter enables SASL authentication.
By default, no SASL users are configured, so no logins are possible. Use the following commands to manage users:
tux > saslpasswd2 -f /etc/libvirt/qemu_passwd.db -a qemu tux
tux > saslpasswd2 -f /etc/libvirt/qemu_passwd.db -a qemu -d tux
tux > sasldblistusers2 -f /etc/libvirt/qemu_passwd.db
Restart libvirtd:
tux > sudo systemctl restart libvirtd
Restart all VM Guests that have been running prior to changing the configuration. VM Guests that have not been restarted will not use SASL authentication for VNC connects.
SASL authentication is currently supported by Virtual Machine Manager and
virt-viewer.
Both of these viewers also support TLS/SSL connections.
Access to the VNC server may also be controlled by setting a VNC password. You can either set a global password for all VM Guests or set individual passwords for each guest. The latter requires to edit the VM Guest's configuration files.
If you are using single password authentication, it is good practice to set a global password even if setting passwords for each VM Guest. This will always leave your virtual machines protected with a “fallback” password if you forget to set a per-machine password. The global password will only be used if no other password is set for the machine.
Change the configuration in
/etc/libvirt/qemu.conf as follows:
vnc_listen = "0.0.0.0"
vnc_password = "PASSWORD"
The first parameter enables VNC to listen on all public interfaces (rather than to the local host only), and the second parameter sets the password. The maximum length of the password is eight characters.
Restart libvirtd:
tux > sudo systemctl restart libvirtd
Restart all VM Guests that have been running prior to changing the configuration. VM Guests that have not been restarted will not use password authentication for VNC connects.
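Since QEMU only uses the first eight characters of a VNC password, it can help to check a prospective password before writing it to qemu.conf. A small sketch; the helper name is made up for illustration:

```shell
# check_vnc_password: warn when a password exceeds QEMU's 8-character VNC
# limit and print the part that will actually be used.
check_vnc_password() {
  local pw="$1"
  if [ "${#pw}" -gt 8 ]; then
    echo "warning: only the first 8 characters will be used" >&2
  fi
  echo "${pw:0:8}"
}
check_vnc_password "short"
check_vnc_password "longpassword"   # prints "longpass" plus a warning
```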
Change the configuration in
/etc/libvirt/qemu.conf as follows to enable VNC
to listen on all public interfaces (rather than to the local host
only).
vnc_listen = "0.0.0.0"
Open the VM Guest's XML configuration file in an editor. Replace
VM_NAME in the following example with the
name of the VM Guest. The editor defaults to
$EDITOR. If that variable is not set,
vi is used.
tux > virsh edit VM_NAME
Search for the element <graphics> with
the attribute type='vnc', for example:
<graphics type='vnc' port='-1' autoport='yes'/>
Add the passwd=PASSWORD
attribute, save the file and exit the editor. The maximum length of
the password is eight characters.
<graphics type='vnc' port='-1' autoport='yes' passwd='PASSWORD'/>
Restart libvirtd:
tux > sudo systemctl restart libvirtd
Restart all VM Guests that have been running prior to changing the configuration. VM Guests that have not been restarted will not use password authentication for VNC connects.
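The per-guest edit above can also be scripted. A sketch that patches a copy of a guest definition with sed, assuming the <graphics type='vnc'> element does not yet carry a passwd attribute (in practice you would feed such an edit through virsh edit):

```shell
# Add passwd='PASSWORD' to the VNC <graphics> element of a copied definition.
XML=$(mktemp)
cat > "$XML" <<'EOF'
<graphics type='vnc' port='-1' autoport='yes'/>
EOF
sed -i "s|<graphics type='vnc' \(.*\)/>|<graphics type='vnc' \1 passwd='PASSWORD'/>|" "$XML"
patched=$(cat "$XML")
rm "$XML"
echo "$patched"
```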
The VNC protocol is not considered to be safe. Although the password is
sent encrypted, it might be vulnerable when an attacker can sniff both the
encrypted password and the encryption key. Therefore, it is recommended to
use VNC with TLS/SSL or tunneled over SSH.
virt-viewer, Virtual Machine Manager and Remmina (refer to Section 4.2, “Remmina: the Remote Desktop Client”) support both methods.
To connect to a hypervisor with libvirt, you need to
specify a uniform resource identifier (URI). This URI is needed with
virsh and virt-viewer (except when
working as root on the VM Host Server) and is optional for the
Virtual Machine Manager. Although the latter can be called with a connection parameter
(for example, virt-manager -c qemu:///system), it also
offers a graphical interface to create connection URIs. See
Section 10.2.2, “Managing Connections with Virtual Machine Manager” for details.
HYPERVISOR+PROTOCOL://USER@REMOTE/CONNECTION_TYPE
HYPERVISOR: Specify the hypervisor. openSUSE Leap currently supports the
following hypervisors: test (dummy for testing), qemu (KVM) and xen (Xen).
PROTOCOL: When connecting to a remote host, specify the protocol here. It can be
one of: ssh, tcp or tls.
USER@REMOTE: When connecting to a remote host, specify the user name and the remote host
name. If no user name is specified, the user name that has called the
command ($USER) is used.
CONNECTION_TYPE: When connecting to the QEMU hypervisor, two connection types are accepted: system for full access rights, or session for restricted access.
test:///default
Connect to the local dummy hypervisor. Useful for testing.
qemu:///system or xen:///system
Connect to the QEMU/Xen hypervisor on the local host having full access (type system).
qemu+ssh://tux@mercury.example.com/system or
xen+ssh://tux@mercury.example.com/system
Connect to the QEMU/Xen hypervisor on the remote host mercury.example.com. The connection is established via an SSH tunnel.
qemu+tls://saturn.example.com/system or xen+tls://saturn.example.com/system
Connect to the QEMU/Xen hypervisor on the remote host saturn.example.com. The connection is established using TLS/SSL.
For more details and examples, refer to the libvirt documentation at
http://libvirt.org/uri.html.
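To make the URI scheme concrete, here is a small helper that assembles a connection URI from its components; the function name and parameters are illustrative and not part of libvirt:

```shell
# Build HYPERVISOR+PROTOCOL://USER@REMOTE/CONNECTION_TYPE; empty parts are
# simply omitted, which yields local URIs such as qemu:///system.
libvirt_uri() {
  local hypervisor="$1" protocol="$2" user="$3" host="$4" conn="$5"
  local scheme="$hypervisor"
  if [ -n "$protocol" ]; then
    scheme="$hypervisor+$protocol"
  fi
  local authority=""
  if [ -n "$host" ]; then
    if [ -n "$user" ]; then
      authority="$user@"
    fi
    authority="$authority$host"
  fi
  echo "$scheme://$authority/$conn"
}
libvirt_uri qemu "" "" "" system                    # → qemu:///system
libvirt_uri xen ssh tux mercury.example.com system  # → xen+ssh://tux@mercury.example.com/system
```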
A user name needs to be specified when using Unix socket authentication (regardless of whether using the user/password authentication scheme or PolKit). This applies to all SSH and local connections.
There is no need to specify a user name when using SASL authentication (for TCP or TLS connections) or when doing no additional server-side authentication for TLS connections. With SASL, the user name will not be evaluated—you will be prompted for a SASL user/password combination in any case.
As mentioned above, a connection to the QEMU hypervisor can be
established using two different protocols: session
and system. A “session” connection is
spawned with the same privileges as the client program. Such a
connection is intended for desktop virtualization, since it is
restricted (for example no USB/PCI device assignments, no virtual
network setup, limited remote access to libvirtd).
The “system” connection, which is intended for server virtualization,
has no functional restrictions but is, by default, only accessible by
root. However, with the addition of the DAC (Discretionary
Access Control) driver to libvirt it is now possible to grant
non-privileged users “system” access. To grant
“system” access to the user tux, proceed as
follows:
Enable access via Unix sockets as described in Section 10.1.1.1, “Access Control for Unix Sockets with Permissions and Group Ownership”. In that
example, access to libvirt is granted to all members of the group
libvirt, and tux is
made a member of this group. This ensures that tux can connect
using virsh or Virtual Machine Manager.
Edit /etc/libvirt/qemu.conf and change the
configuration as follows:
user = "tux"
group = "libvirt"
dynamic_ownership = 1
This ensures that the VM Guests are started by tux and that
resources bound to the guest (for example virtual disks) can be accessed
and modified by tux.
Make tux a member of the group kvm:
tux > sudo usermod --append --groups kvm tux
This step is needed to grant access to /dev/kvm,
which is required to start VM Guests.
Restart libvirtd:
tux > sudo systemctl restart libvirtd
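After completing these steps, you can verify the group memberships before trying to connect. A sketch; the helper function is illustrative:

```shell
# in_group USER GROUP: succeed if USER is a member of GROUP.
in_group() {
  id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}
# Check both groups needed for "system" access (libvirt) and /dev/kvm (kvm).
for grp in libvirt kvm; do
  if in_group tux "$grp"; then
    echo "tux is in $grp"
  else
    echo "tux is missing from $grp"
  fi
done
```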
The Virtual Machine Manager uses a Connection for every VM Host Server
it manages. Each connection contains all VM Guests on the respective
host. By default, a connection to the local host is already configured
and connected.
All configured connections are displayed in the Virtual Machine Manager main window. Active connections are marked with a small triangle, which you can click to fold or unfold the list of VM Guests for this connection.
Inactive connections are listed in gray and are marked with Not
Connected. Either double-click a connection or right-click it and choose
Connect from the context menu. You can also
Delete an existing connection from this menu.
It is not possible to edit an existing connection. To change a connection, create a new one with the desired parameters and delete the “old” one.
To add a new connection in the Virtual Machine Manager, proceed as follows:
Choose File › Add Connection.
Choose the host's hypervisor (Xen or QEMU/KVM).
(Optional) To set up a remote connection, choose Connect to remote host. For more information, see Section 10.3, “Configuring Remote Connections”.
In case of a remote connection, specify the host name
of the remote machine in the format
USERNAME@REMOTE_HOST.
There is no need to specify a user name for TCP and TLS connections: In
these cases, it will not be evaluated. However, in the case of SSH
connections, specifying a user name is necessary when you want to
connect as a user other than
root.
If you do not want the connection to be automatically started when starting the Virtual Machine Manager, deactivate Autoconnect.
Finish the configuration by clicking Connect.
A major benefit of libvirt is the ability to manage VM Guests on
different remote hosts from a central location. This section gives
detailed instructions on how to configure server and client to allow
remote connections.
Remote Tunnel over SSH (qemu+ssh or xen+ssh)
Enabling a remote connection that is tunneled over SSH on the
VM Host Server only requires the ability to accept SSH connections. Make
sure the SSH daemon is started (systemctl status
sshd) and that the ports for service
SSH are opened in the firewall.
User authentication for SSH connections can be done using traditional
file user/group ownership and permissions as described in
Section 10.1.1.1, “Access Control for Unix Sockets with Permissions and Group Ownership”.
Connecting as user tux
(qemu+ssh://tux@mercury.example.com/system or
xen+ssh://tux@mercury.example.com/system) works out
of the box and does not require additional configuration on the
libvirt side.
When connecting via SSH
qemu+ssh://USER@SYSTEM
or
xen+ssh://USER@SYSTEM
you need to provide the password for USER.
This can be avoided by copying your public key to
~USER/.ssh/authorized_keys
on the VM Host Server as explained in
Section 14.5.2, “Copying an SSH Key”. Using an ssh-agent on the
machine from which you are connecting adds even more
convenience. For more information, see
Section 14.5.3, “Using the ssh-agent”.
Remote TLS/SSL Connection with x509 Certificate (qemu+tls or xen+tls)
Using TCP connections with TLS/SSL encryption and authentication via x509 certificates is much more complicated to set up than SSH, but it is a lot more scalable. Use this method if you need to manage several VM Host Servers with a varying number of administrators.
TLS (Transport Layer Security) encrypts the communication between two computers by using certificates. The computer starting the connection is always considered the “client”, using a “client certificate”, while the receiving computer is always considered the “server”, using a “server certificate”. This scenario applies, for example, if you manage your VM Host Servers from a central desktop.
If connections are initiated from both computers, each needs to have a client and a server certificate. This is the case, for example, if you migrate a VM Guest from one host to another.
Each x509 certificate has a matching private key file. Only the combination of certificate and private key file can identify itself correctly. To assure that a certificate was issued by the assumed owner, it is signed and issued by a central certificate called certificate authority (CA). Both the client and the server certificates must be issued by the same CA.
Using a remote TLS/SSL connection only ensures that two computers are allowed to communicate in a certain direction. Restricting access to certain users can indirectly be achieved on the client side by restricting access to the certificates. For more information, see Section 10.3.2.5, “Restricting Access (Security Considerations)”.
libvirt also supports user authentication on the server with
SASL. For more information, see
Section 10.3.2.6, “Central User Authentication with SASL for TLS Sockets”.
The VM Host Server is the machine receiving connections. Therefore, the
server certificates need to be installed. The CA
certificate needs to be installed, too. When the certificates are
in place, TLS support can be turned on for libvirt.
Create the server certificate and export it together with the CA certificate as described in Section A.1, “Generating x509 Client/Server Certificates”.
Create the following directories on the VM Host Server:
tux > sudo mkdir -p /etc/pki/CA/ /etc/pki/libvirt/private/
Install the certificates as follows:
CA certificate: /etc/pki/CA/cacert.pem
Server certificate: /etc/pki/libvirt/servercert.pem
Server key: /etc/pki/libvirt/private/serverkey.pem
Make sure to restrict access to certificates as explained in Section 10.3.2.5, “Restricting Access (Security Considerations)”.
Enable TLS support by editing
/etc/libvirt/libvirtd.conf and setting
listen_tls = 1. Restart libvirtd:
tux > sudo systemctl restart libvirtd
By default, libvirt uses the TCP port 16514 for accepting secure
TLS connections. Open this port in the firewall.
Restarting libvirtd with TLS enabled
If you enable TLS for libvirt, the server certificates need to be
in place, otherwise restarting libvirtd will fail. You also need
to restart libvirtd in case you change the certificates.
The client is the machine initiating connections. Therefore the client certificates need to be installed. The CA certificate needs to be installed, too.
Create the client certificate and export it together with the CA certificate as described in Section A.1, “Generating x509 Client/Server Certificates”.
Create the following directories on the client:
tux > sudo mkdir -p /etc/pki/CA/ /etc/pki/libvirt/private/
Install the certificates as follows:
CA certificate: /etc/pki/CA/cacert.pem
Client certificate: /etc/pki/libvirt/clientcert.pem
Client key: /etc/pki/libvirt/private/clientkey.pem
Make sure to restrict access to certificates as explained in Section 10.3.2.5, “Restricting Access (Security Considerations)”.
Test the client/server setup by issuing the following command. Replace mercury.example.com with the name of your VM Host Server. Specify the same fully qualified host name as used when creating the server certificate.
# QEMU/KVM
virsh -c qemu+tls://mercury.example.com/system list --all
# Xen
virsh -c xen+tls://mercury.example.com/system list --all
If your setup is correct, you will see a list of all VM Guests
registered with libvirt on the VM Host Server.
Currently, VNC communication over TLS is only supported by a few tools.
Common VNC viewers such as tightvnc or
tigervnc do not support TLS/SSL. The only supported
alternative to Virtual Machine Manager and virt-viewer is
remmina (refer to Section 4.2, “Remmina: the Remote Desktop Client”).
To access the graphical console via VNC over TLS/SSL, you need to configure the VM Host Server as follows:
Open ports for the service
VNC in your firewall.
Create a directory /etc/pki/libvirt-vnc and
link the certificates into this directory as follows:
tux > sudo mkdir -p /etc/pki/libvirt-vnc && cd /etc/pki/libvirt-vnc
tux > sudo ln -s /etc/pki/CA/cacert.pem ca-cert.pem
tux > sudo ln -s /etc/pki/libvirt/servercert.pem server-cert.pem
tux > sudo ln -s /etc/pki/libvirt/private/serverkey.pem server-key.pem
Edit /etc/libvirt/qemu.conf and set the
following parameters:
vnc_listen = "0.0.0.0"
vnc_tls = 1
vnc_tls_x509_verify = 1
Restart libvirtd:
tux > sudo systemctl restart libvirtd
The VNC TLS setting is only set when starting a VM Guest. Therefore, you need to restart all machines that have been running prior to making the configuration change.
The only action needed on the client side is to place the x509 client
certificates in a location recognized by the client of choice.
Unfortunately, Virtual Machine Manager and virt-viewer expect the
certificates in different locations. Virtual Machine Manager can either read from a
system-wide location applying to all users, or from a per-user location.
Remmina (refer to Section 4.2, “Remmina: the Remote Desktop Client”) asks for the location of
certificates when initializing the connection to the remote VNC session.
Virtual Machine Manager (virt-manager)
To connect to the remote host, Virtual Machine Manager requires the setup explained in Section 10.3.2.3, “Configuring the Client and Testing the Setup”. To be able to connect via VNC, the client certificates also need to be placed in the following locations:
System-wide location:
/etc/pki/CA/cacert.pem
/etc/pki/libvirt-vnc/clientcert.pem
/etc/pki/libvirt-vnc/private/clientkey.pem
Per-user location:
/etc/pki/CA/cacert.pem
~/.pki/libvirt-vnc/clientcert.pem
~/.pki/libvirt-vnc/private/clientkey.pem
virt-viewer
virt-viewer only accepts certificates from a
system-wide location:
/etc/pki/CA/cacert.pem
/etc/pki/libvirt-vnc/clientcert.pem
/etc/pki/libvirt-vnc/private/clientkey.pem
Make sure to restrict access to certificates as explained in Section 10.3.2.5, “Restricting Access (Security Considerations)”.
Each x509 certificate consists of two pieces: the public certificate and a private key. A client can only authenticate using both pieces. Therefore, any user that has read access to the client certificate and its private key can access your VM Host Server. On the other hand, an arbitrary machine equipped with the full server certificate can pretend to be the VM Host Server. Since this is probably not desirable, access to at least the private key files needs to be restricted as much as possible. The easiest way to control access to a key file is to use access permissions.
Server certificates need to be readable for QEMU processes. On
openSUSE Leap, QEMU processes started from libvirt tools
are owned by root, so it is sufficient if root can
read the certificates:
tux > chmod 700 /etc/pki/libvirt/private/
tux > chmod 600 /etc/pki/libvirt/private/serverkey.pem
If you change the ownership for QEMU processes in
/etc/libvirt/qemu.conf, you also need to adjust
the ownership of the key file.
To control access to a key file that is available system-wide,
restrict read access to a certain group, so that only members of
that group can read the key file. In the following example, a group
libvirt is created, and
group ownership of the clientkey.pem file and
its parent directory is set to
libvirt. Afterward, the
access permissions are restricted to owner and group. Finally the
user tux is added to the group
libvirt, and thus can
access the key file.
CERTPATH="/etc/pki/libvirt/"
# create group libvirt
groupadd libvirt
# change ownership to user root and group libvirt
chown root.libvirt $CERTPATH/private $CERTPATH/clientkey.pem
# restrict permissions
chmod 750 $CERTPATH/private
chmod 640 $CERTPATH/private/clientkey.pem
# add user tux to group libvirt
usermod --append --groups libvirt tux
User-specific client certificates for accessing the graphical
console of a VM Guest via VNC need to be placed in the user's
home directory in ~/.pki. Contrary to SSH, for
example, VNC viewers using these certificates do not check the
access permissions of the private key file. Therefore, it is solely
the user's responsibility to make sure the key file is not readable
by others.
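A quick way to audit a key file's permissions from a script; the sketch below operates on a temporary file for illustration and uses GNU stat:

```shell
KEY=$(mktemp)               # stands in for the per-user client key file
chmod 600 "$KEY"
mode=$(stat -c %a "$KEY")   # numeric mode, e.g. 600
case "$mode" in
  [1-7]00) echo "ok: key is private (mode $mode)" ;;
  *)       echo "warning: key is readable by group/others (mode $mode)" ;;
esac
rm "$KEY"
```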
By default, every client that is equipped with appropriate client certificates may connect to a VM Host Server accepting TLS connections. Therefore, it is possible to use additional server-side authentication with SASL as described in Section 10.1.1.3, “User name and Password Authentication with SASL”.
It is also possible to restrict access with a whitelist of DNs (distinguished names), so only clients with a certificate matching a DN from the list can connect.
Add a list of allowed DNs to tls_allowed_dn_list in
/etc/libvirt/libvirtd.conf. This list may contain
wildcards. Do not specify an empty list, since that would result in
refusing all connections.
tls_allowed_dn_list = [
   "C=US,L=Provo,O=SUSE Linux Products GmbH,OU=*,CN=venus.example.com,EMAIL=*",
   "C=DE,L=Nuremberg,O=SUSE Linux Products GmbH,OU=Documentation,CN=*"]
Get the distinguished name of a certificate with the following command:
tux > certtool -i --infile /etc/pki/libvirt/clientcert.pem | grep "Subject:"
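If certtool is not available, the DN can also be extracted from any textual certificate dump that contains a Subject line. A sketch on sample output; the Subject value below is illustrative:

```shell
# Pull the DN out of an indented "Subject:" line with sed.
SAMPLE="        Subject: C=DE,L=Nuremberg,O=SUSE Linux Products GmbH,OU=Documentation,CN=venus.example.com"
dn=$(printf '%s\n' "$SAMPLE" | sed -n 's/^[[:space:]]*Subject: //p')
echo "$dn"
```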
Restart libvirtd after having changed the configuration:
tux > sudo systemctl restart libvirtd
A direct user authentication via TLS is not possible—this is
handled indirectly on each client via the read permissions for the
certificates as explained in
Section 10.3.2.5, “Restricting Access (Security Considerations)”. However, if
a central, server-based user authentication is needed, libvirt
also allows using SASL (Simple Authentication and Security Layer) on
top of TLS for direct user authentication. See
Section 10.1.1.3, “User name and Password Authentication with SASL” for
configuration details.
virsh Cannot Connect to Server
Check the following in the given order:
Is it a firewall issue (TCP port 16514 needs to be open on the server)?
Is the client certificate (certificate and key) readable by the user that has started Virtual Machine Manager/virsh?
Has the same fully qualified host name as in the server certificate been specified with the connection?
Is TLS enabled on the server (listen_tls = 1)?
Has libvirtd been restarted on the server?
Ensure that you can connect to the remote server using Virtual Machine Manager. If
so, check whether the virtual machine on the server has been started
with TLS support. The virtual machine's name in the following example
is sles.
tux > ps ax | grep qemu | grep "\-name sles" | awk -F" -vnc " '{ print FS $2 }'
If the output does not begin with a string similar to the following, the machine has not been started with TLS support and must be restarted.
-vnc 0.0.0.0:0,tls,x509verify=/etc/pki/libvirt
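The same check can be wrapped in a small reusable test; the helper below is illustrative and simply looks for a tls option in the -vnc argument string:

```shell
# has_vnc_tls: succeed if a QEMU command line carries a -vnc option with tls.
has_vnc_tls() {
  case "$1" in
    *-vnc\ *tls*) return 0 ;;
    *) return 1 ;;
  esac
}
has_vnc_tls "-vnc 0.0.0.0:0,tls,x509verify=/etc/pki/libvirt" && echo "TLS enabled"
has_vnc_tls "-vnc 0.0.0.0:0" || echo "TLS missing"
```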
When managing a VM Guest on the VM Host Server itself, you can access the complete
file system of the VM Host Server to attach or create virtual hard disks or to
attach existing images to the VM Guest. However, this is not possible when
managing VM Guests from a remote host. For this reason, libvirt supports
so-called “storage pools”, which can be accessed from remote
machines.
To be able to access CD/DVD ISO images on the VM Host Server from remote, they also need to be placed in a storage pool.
libvirt knows two different types of storage: volumes and pools.
A storage volume is a storage device that can be assigned to a guest—a virtual disk or a CD/DVD/floppy image. Physically (on the VM Host Server) it can be a block device (a partition, a logical volume, etc.) or a file.
A storage pool is a storage resource on the VM Host Server that can be used for storing volumes, similar to network storage for a desktop machine. Physically it can be one of the following types:
A directory for hosting image files. The files can be either one of the supported disk formats (raw, qcow2, or qed), or ISO images.
Use a complete physical disk as storage. A partition is created for each volume that is added to the pool.
Specify a partition to be used in the same way as a file system
directory pool (a directory for hosting image files). The only
difference to using a file system directory is that libvirt takes
care of mounting the device.
Set up a pool on an iSCSI target. You need to have logged in to
the volume once before to use it with libvirt. Use the YaST
iSCSI Initiator to detect and log in to a
volume. Volume creation on iSCSI pools is not supported;
instead, each existing Logical Unit Number (LUN) represents a volume.
Each volume/LUN also needs a valid (empty) partition table or disk
label before you can use it. If missing, use fdisk
to add it:
~ # fdisk -cu /dev/disk/by-path/ip-192.168.2.100:3260-iscsi-iqn.2010-10.com.example:[...]-lun-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc15cdc4e.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Use an LVM volume group as a pool. You may either use a predefined volume group, or create a group by specifying the devices to use. Storage volumes are created as partitions on the volume.
When the LVM-based pool is deleted in the Storage Manager, the volume group is deleted as well. This results in a non-recoverable loss of all data stored on the pool!
At the moment, multipathing support is limited to assigning existing
devices to the guests. Volume creation or configuring multipathing from
within libvirt is not supported.
Specify a network directory to be used in the same way as a file system
directory pool (a directory for hosting image files). The only
difference to using a file system directory is that libvirt takes
care of mounting the directory. Supported protocols are NFS and
GlusterFS.
Use an SCSI host adapter in almost the same way as an iSCSI target. We
recommend using a device name from
/dev/disk/by-* rather than
/dev/sdX, since the
latter can change (for example, when adding or removing hard disks).
Volume creation on iSCSI pools is not supported. Instead, each existing
LUN (Logical Unit Number) represents a volume.
To avoid data loss or data corruption, do not attempt to use resources such
as LVM volume groups, iSCSI targets, etc., that are also used to build
storage pools on the VM Host Server. There is no need to connect to these
resources from the VM Host Server or to mount them on the VM Host Server—libvirt
takes care of this.
Do not mount partitions on the VM Host Server by label. Under certain circumstances it is possible that a partition is labeled from within a VM Guest with a name already existing on the VM Host Server.
The Virtual Machine Manager provides a graphical interface—the Storage Manager—to manage storage volumes and pools. To access it, either right-click a connection and choose Details, or highlight a connection and choose Edit › Connection Details. Select the Storage tab.
To add a storage pool, proceed as follows:
Click Add in the bottom left corner. The Add a New Storage Pool dialog appears.
Provide a Name for the pool (consisting of
alphanumeric characters and _-.) and select a
Type. Proceed with Forward.
Specify the required details in the following window. The data that needs to be entered depends on the type of pool you are creating:
Target Path: Specify an existing directory.
Target Path: The directory that hosts the
devices. The default value /dev should usually
fit.
Format: Format of the device's partition table.
Using auto should usually work. If not, get the
required format by running the command parted
-l on the VM Host Server.
Source Path: Path to the device. It is
recommended to use a device name from
/dev/disk/by-* rather than the simple
/dev/sdX, since the
latter can change (for example, when adding or removing hard disks).
You need to specify the path that represents the whole disk, not a
partition on the disk (if existing).
Build Pool: Activating this option formats the device. Use with care—all data on the device will be lost!
Target Path: Mount point on the VM Host Server file system.
Format: File system format of the device. The
default value auto should work.
Source Path: Path to the device file. It is
recommended to use a device name from
/dev/disk/by-* rather than
/dev/sdX, because
the latter can change (for example, when adding or removing hard
disks).
Get the necessary data by running the following command on the VM Host Server:
tux > sudo iscsiadm --mode node
It will return a list of iSCSI volumes with the following format. The elements in bold text are required:
IP_ADDRESS:PORT,TPGT TARGET_NAME_(IQN)
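The fields of such a line can be split apart with plain parameter expansion; the sample line below is illustrative:

```shell
# Split one line of iscsiadm output into host name and target name (IQN).
LINE="192.168.2.100:3260,1 iqn.2010-10.com.example:storage.lun1"
hostport=${LINE%%,*}   # 192.168.2.100:3260
host=${hostport%%:*}   # 192.168.2.100 (host name or IP of the iSCSI server)
iqn=${LINE#* }         # iqn.2010-10.com.example:storage.lun1 (target name)
echo "$host"
echo "$iqn"
```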
Target Path: The directory containing the device
file. Use /dev/disk/by-path (default) or
/dev/disk/by-id.
Host Name: Host name or IP address of the iSCSI server.
Source IQN: The iSCSI target name (IQN).
Target Path: In case you use an existing volume
group, specify the existing device path. When building a new LVM
volume group, specify a device name in the /dev
directory that does not already exist.
Source Path: Leave empty when using an existing volume group. When creating a new one, specify its devices here.
Build Pool: Only activate when creating a new volume group.
Target Path: Support for multipathing is currently limited to making all multipath devices available. Therefore, specify an arbitrary string here that will then be ignored. The path is required, otherwise the XML parser will fail.
Target Path: Mount point on the VM Host Server file system.
Host Name: IP address or host name of the server exporting the network file system.
Source Path: Directory on the server that is being exported.
Target Path: The directory containing the device
file. Use /dev/disk/by-path (default) or
/dev/disk/by-id.
Source Path: Name of the SCSI adapter.
Using the file browser by clicking Browse is not possible when operating from remote.
Click Finish to add the storage pool.
Virtual Machine Manager's Storage Manager lets you create or delete volumes in a pool. You may also temporarily deactivate or permanently delete existing storage pools. Changing the basic configuration of a pool is currently not supported by SUSE.
The purpose of storage pools is to provide block devices located on the VM Host Server that can be added to a VM Guest when managing it from remote. To make a pool temporarily inaccessible from remote, click Stop in the bottom left corner of the Storage Manager. Stopped pools are marked as inactive and are grayed out in the list pane. By default, a newly created pool will be automatically started on boot of the VM Host Server.
To start an inactive pool and make it available from remote again, click Start in the bottom left corner of the Storage Manager.
Volumes from a pool attached to VM Guests are always available, regardless of the pool's state (stopped or started). The state of the pool solely affects the ability to attach volumes to a VM Guest via remote management.
To permanently make a pool inaccessible, click Delete in the bottom left corner of the Storage Manager. You may only delete inactive pools. Deleting a pool does not physically erase its contents on the VM Host Server—it only deletes the pool configuration. However, you need to be extra careful when deleting pools, especially when deleting LVM volume group-based pools:
Deleting storage pools based on local file system directories, local partitions or disks has no effect on the availability of volumes from these pools currently attached to VM Guests.
Volumes located in pools of type iSCSI, SCSI, LVM group or Network Exported Directory will become inaccessible from the VM Guest if the pool is deleted. Although the volumes themselves will not be deleted, the VM Host Server will no longer have access to the resources.
Volumes on iSCSI/SCSI targets or Network Exported Directory will become accessible again when creating an adequate new pool or when mounting/accessing these resources directly from the host system.
When deleting an LVM group-based storage pool, the LVM group definition will be erased and the LVM group will no longer exist on the host system. The configuration is not recoverable and all volumes from this pool are lost.
Virtual Machine Manager lets you create volumes in all storage pools, except in pools of
types Multipath, iSCSI, or SCSI. A volume in these pools is equivalent to
a LUN and cannot be changed from within libvirt.
A new volume can either be created using the Storage Manager or while adding a new storage device to a VM Guest. In either case, select a storage pool from the left panel, then click .
Specify a Name for the image and choose an image format.
Note that SUSE currently only supports raw,
qcow2, or qed images. The latter
option is not available on LVM group-based pools.
Next to Max Capacity, specify the maximum size
that the disk image is allowed to reach. Unless you are working with a
qcow2 image, you can also set an amount for
Allocation that should be allocated initially. If
both values differ, a sparse image file will be created, which grows on
demand.
For qcow2 images, you can use a backing store (also called a “backing file”), which
constitutes a base image. The newly created qcow2
image will then only record the changes that are made to the base image.
Start the volume creation by clicking Finish.
Deleting a volume can only be done from the Storage Manager, by selecting a volume and clicking Delete Volume. Confirm with Yes.
Volumes can be deleted even if they are currently used in an active or inactive VM Guest. There is no way to recover a deleted volume.
Whether a volume is used by a VM Guest is indicated in the Used By column in the Storage Manager.
Managing Storage with virsh
Managing storage from the command line is also possible by using
virsh. However, creating storage pools is currently not
supported by SUSE. Therefore, this section is restricted to documenting
functions like starting, stopping and deleting pools and volume management.
A list of all virsh subcommands for managing pools and
volumes is available by running virsh help pool and
virsh help volume, respectively.
List all pools currently active by executing the following command. To also
list inactive pools, add the option --all:
tux > virsh pool-list --details
Details about a specific pool can be obtained with the
pool-info subcommand:
tux > virsh pool-info POOL
Volumes can only be listed per pool by default. To list all volumes from a pool, enter the following command.
tux > virsh vol-list --details POOL
At the moment virsh offers no tools to show whether a
volume is used by a guest or not. The following procedure describes a way
to list volumes from all pools that are currently used by a VM Guest.
Create an XSLT style sheet by saving the following content to a file, for example, ~/libvirt/guest_storage_list.xsl:
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="text"/>
<xsl:template match="text()"/>
<xsl:strip-space elements="*"/>
<xsl:template match="disk">
<xsl:text> </xsl:text>
<xsl:value-of select="(source/@file|source/@dev|source/@dir)[1]"/>
<xsl:text> </xsl:text>
</xsl:template>
</xsl:stylesheet>
Run the following commands in a shell. It is assumed that the guest's XML
definitions are all stored in the default location
(/etc/libvirt/qemu). xsltproc is
provided by the package
libxslt.
SSHEET="$HOME/libvirt/guest_storage_list.xsl"
cd /etc/libvirt/qemu
for FILE in *.xml; do
  basename $FILE .xml
  xsltproc $SSHEET $FILE
done
Use the virsh pool subcommands to start, stop or delete
a pool. Replace POOL with the pool's name or its
UUID in the following examples:
tux > virsh pool-destroy POOL
Volumes from a pool attached to VM Guests are always available, regardless of the pool's state ( (stopped) or (started)). The state of the pool solely affects the ability to attach volumes to a VM Guest via remote management.
tux > virsh pool-delete POOL
tux > virsh pool-start POOL
tux > virsh pool-autostart POOL
Only pools that are marked to autostart will automatically be started if the VM Host Server reboots.
tux > virsh pool-autostart POOL --disable
virsh offers two ways to add volumes to storage pools:
either from an XML definition with vol-create and
vol-create-from or via command line arguments with
vol-create-as. The first two methods are currently not
supported by SUSE, therefore this section focuses on the subcommand
vol-create-as.
To add a volume to an existing pool, enter the following command:
tux > virsh vol-create-as POOL NAME 12G --format raw|qcow2|qed --allocation 4G

POOL: Name of the pool to which the volume should be added.
NAME: Name of the volume.
12G: Size of the image, in this example 12 gigabytes. Use the suffixes k, M, G, T for kilobyte, megabyte, gigabyte, and terabyte, respectively.
--format: Format of the volume. SUSE currently supports raw, qcow2, and qed.
--allocation: Optional parameter. When not specifying this parameter, a sparse image file with no allocation will be generated. To create a non-sparse volume, specify the whole image size with this parameter (which would be 12G in this example).
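As a sketch of the difference, the following commands create a sparse and a fully allocated volume; the pool and volume names are hypothetical examples:

```shell
# Sparse volume: 12 GB virtual size, no space reserved up front.
virsh vol-create-as mypool sparse_vol.qcow2 12G --format qcow2

# Non-sparse volume: the full 12 GB is allocated immediately.
virsh vol-create-as mypool full_vol.raw 12G --format raw --allocation 12G
```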
Another way to add volumes to a pool is to clone an existing volume. The new instance is always created in the same pool as the original.
tux > virsh vol-clone NAME_EXISTING_VOLUME NAME_NEW_VOLUME --pool POOL
To permanently delete a volume from a pool, use the subcommand
vol-delete:
tux > virsh vol-delete NAME --pool POOL
--pool is optional. libvirt tries to locate the volume
automatically. If that fails, specify this parameter.
A volume will be deleted in any case, regardless of whether it is currently used in an active or inactive VM Guest. There is no way to recover a deleted volume.
Whether a volume is used by a VM Guest can only be detected by using the method described in Procedure 11.1, “Listing all Storage Volumes Currently Used on a VM Host Server”.
After you create a volume as described in Section 11.2.3, “Adding Volumes to a Storage Pool”, you can attach it to a virtual machine and use it as a hard disk:
tux > virsh attach-disk DOMAIN SOURCE_IMAGE_FILE TARGET_DISK_DEVICE
For example:
tux > virsh attach-disk sles12sp3 /virt/images/example_disk.qcow2 sda2
To check if the new disk is attached, inspect the result of the
virsh dumpxml command:
root # virsh dumpxml sles12sp3
[...]
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/virt/images/example_disk.qcow2'/>
<backingStore/>
<target dev='sda2' bus='scsi'/>
<alias name='scsi0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
[...]
You can attach disks to both active and inactive domains. The attachment
is controlled by the --live and --config
options:
--live
Hotplugs the disk to an active domain. The attachment is not saved in
the domain configuration. Using --live on an inactive
domain is an error.
--config
Changes the domain configuration persistently. The attached disk is then available after the next domain start.
--live --config
Hotplugs the disk and adds it to the persistent domain configuration.
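A combined invocation might look as follows; the domain name, image path, and target device are hypothetical:

```shell
# Hotplug a disk into the running domain and also persist it in the
# domain configuration, so it survives the next domain start:
virsh attach-disk sles12sp3 /virt/images/data_disk.qcow2 sdb \
  --live --config
```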
virsh attach-device
virsh attach-device is the more generic form of
virsh attach-disk. You can use it to attach other
types of devices to a domain.
To detach a disk from a domain, use virsh detach-disk:
root # virsh detach-disk DOMAIN TARGET_DISK_DEVICE
For example:
root # virsh detach-disk sles12sp3 sda2
You can control the attachment with the --live and
--config options as described in
Section 11.2.5, “Attaching Volumes to a VM Guest”.
virtlockd #
Locking block devices and disk files prevents concurrent writes to these resources from different VM Guests. It provides protection against starting the same VM Guest twice, or adding the same disk to two different virtual machines. This reduces the risk of a virtual machine's disk image becoming corrupted because of a wrong configuration.
The locking is controlled by a daemon called
virtlockd. Since it operates
independently from the libvirtd daemon, locks will endure a crash or a
restart of libvirtd. Locks will even persist in the case of an update of
the virtlockd itself, since it can
re-execute itself. This ensures that VM Guests do not
need to be restarted upon a
virtlockd update.
virtlockd is supported for KVM,
QEMU, and Xen.
Locking virtual disks is not enabled by default on openSUSE Leap. To enable and automatically start it upon rebooting, perform the following steps:
Edit /etc/libvirt/qemu.conf and set
lock_manager = "lockd"
Start the virtlockd daemon with
the following command:
tux > sudo systemctl start virtlockd
Restart the libvirtd daemon with:
tux > sudo systemctl restart libvirtd
Make sure virtlockd is
automatically started when booting the system:
tux > sudo systemctl enable virtlockd
By default virtlockd is configured
to automatically lock all disks configured for your VM Guests. The default
setting uses a "direct" lockspace, where the locks are acquired against the
actual file paths associated with the VM Guest <disk> devices. For
example, flock(2) will be called directly on
/var/lib/libvirt/images/my-server/disk0.raw when the
VM Guest contains the following <disk> device:
<disk type='file' device='disk'>
 <driver name='qemu' type='raw'/>
 <source file='/var/lib/libvirt/images/my-server/disk0.raw'/>
 <target dev='vda' bus='virtio'/>
</disk>
The virtlockd configuration can be
changed by editing the file
/etc/libvirt/qemu-lockd.conf. It also contains
detailed comments with further information. Make sure to activate
configuration changes by reloading
virtlockd:
tux > sudo systemctl reload virtlockd
Currently, locking can only be activated globally, so that all virtual disks are locked. Support for locking selected disks is planned for future releases.
When wanting to lock virtual disks placed on LVM or iSCSI volumes shared by several hosts, locking needs to be done by UUID rather than by path (which is used by default). Furthermore, the lockspace directory needs to be placed on a shared file system accessible by all hosts sharing the volume. Set the following options for LVM and/or iSCSI:
lvm_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
iscsi_lockspace_dir = "/MY_LOCKSPACE_DIRECTORY"
Sometimes you need to change—extend or shrink—the size of the
block device used by your guest system. For example, when the disk space
originally allocated is no longer enough, it is time to increase its size.
If the guest disk resides on a logical volume, you can
resize it while the guest system is running. This is a big advantage over an
offline disk resizing (see the virt-resize command from the guestfs
tools described in Section 16.3, “Guestfs Tools”) as the service provided by
the guest is not interrupted by the resizing process. To resize a VM Guest
disk, follow these steps:
Inside the guest system, check the current size of the disk (for example
/dev/vda).
root # fdisk -l /dev/vda
Disk /dev/vda: 160.0 GB, 160041885696 bytes, 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
On the host, resize the logical volume holding the
/dev/vda disk of the guest to the required size, for
example 200 GB.
root # lvresize -L 200G /dev/mapper/vg00-home
  Extending logical volume home to 200.00 GiB
  Logical volume home successfully resized
On the host, resize the block device of the guest backed by
/dev/mapper/vg00-home. Note that you can
find the DOMAIN_ID with virsh
list.
root # virsh blockresize --path /dev/vg00/home --size 200G DOMAIN_ID
Block device '/dev/vg00/home' is resized
Check that the new disk size is accepted by the guest.
root # fdisk -l /dev/vda
Disk /dev/vda: 200.0 GB, 200052357120 bytes, 390727260 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
libvirt #
RADOS Block Devices (RBD) store data in a Ceph cluster. They allow snapshotting,
replication, and data consistency. You can use an RBD from your
libvirt-managed VM Guests similarly to how you use other block devices.
Refer to SUSE Enterprise Storage documentation for more details.
This chapter introduces common networking configurations supported by
libvirt. It does not depend on the hypervisor used. It is valid for all
hypervisors supported by libvirt, such as KVM or Xen. These setups
can be achieved using both the graphical interface of Virtual Machine Manager and the command
line tool virsh.
There are two common network setups to provide a VM Guest with a network connection:
A virtual network for the guest
A network bridge over a host's physical network interface that the guest can use
A virtual network is a computer network which does not consist of a physical network link, but rather uses a virtual network link. Each host can have several virtual networks defined. Virtual networks are based on virtual devices that connect virtual machines inside a hypervisor. They allow outgoing traffic to be translated to the LAN, and provide DHCP and DNS services. Virtual networks can be either isolated, or forwarded to a physical network.
Guests inside an isolated virtual network can communicate with each other, but cannot communicate with guests outside the virtual network. Also, guests not belonging to the isolated virtual network cannot communicate with guests inside.
On the other hand, guests inside a forwarded (NAT,
network address translation) virtual network can make any outgoing network
connection they request. Incoming connections are allowed from VM Host Server, and
from other guests connected to the same virtual network. All other incoming
connections are blocked by iptables rules.
A standard libvirt installation on openSUSE Leap already comes with a predefined virtual network providing DHCP server and network address translation (NAT) named "default".
You can define, configure, and operate both isolated and forwarded virtual networks with Virtual Machine Manager.
Start Virtual Machine Manager. In the list of available connections, right-click the name of the connection for which you need to configure the virtual network, and then select .
In the window, click the tab. You can see the list of all virtual networks available for the current connection. On the right, there are details of the selected virtual network.
To add a new virtual network, click .
Specify a name for the new virtual network and click .
To specify an IPv4 network address space definition, activate the relevant option and type it into the text entry.
libvirt can provide your virtual network with a DHCP server. If you
need it, activate , then type the start
and end IP address range of assignable addresses.
To enable static routing for the new virtual network, activate the relevant option and type the network and gateway addresses.
Click to proceed.
To specify IPv6-related options—network address space, DHCPv6 server, or static route—activate and activate the relevant options and fill in the relevant boxes.
Click to proceed.
Select whether you want isolated or forwarded virtual network.
For forwarded networks, specify the network interface to which the requests will be forwarded, and one of the forwarding modes: While (network address translation) remaps the virtual network address space and allows sharing a single IP address, connects the virtual switch to the physical host LAN with no network translation.
If you did not specify IPv6 network address space definition earlier, you can enable IPv6 internal routing between virtual machines.
Optionally, change the DNS domain name.
Click to create the new virtual network. On
the VM Host Server, a new virtual network bridge
virbrX is available, which
corresponds to the newly created virtual network. You can check with
bridge link. libvirt automatically adds iptables
rules to allow traffic to/from guests attached to the new
virbrX device.
To start a virtual network that is temporarily stopped, follow these steps:
Start Virtual Machine Manager. In the list of available connections, right-click the name of the connection for which you need to configure the virtual network, and then select .
In the window, click the tab. You can see the list of all virtual networks available for the current connection.
To start the virtual network, click .
To stop an active virtual network, follow these steps:
Start Virtual Machine Manager. In the list of available connections, right-click the name of the connection for which you need to configure the virtual network, and then select .
In the window, click the tab. You can see the list of all virtual networks available for the current connection.
Select the virtual network to be stopped, then click .
To delete a virtual network from VM Host Server, follow these steps:
Start Virtual Machine Manager. In the list of available connections, right-click the name of the connection for which you need to configure the virtual network, and then select .
In the window, click the tab. You can see the list of all virtual networks available for the current connection.
Select the virtual network to be deleted, then click .
nsswitch for NAT Networks (in KVM) #
On VM Host Server, install libvirt-nss, which provides NSS support for libvirt:
tux > sudo zypper in libvirt-nss
Add libvirt to
/etc/nsswitch.conf:
...
hosts:  files libvirt mdns_minimal [NOTFOUND=return] dns
...
If NSCD is running, restart it:
tux > sudo systemctl restart nscd
Now you can reach the guest system by name from the host.
The NSS module has limited functionality. It reads
/var/lib/libvirt/dnsmasq/*.status files to find the
host name and corresponding IP addresses in a JSON record describing each
lease provided by dnsmasq. Host name translation can
only be done on those VM Host Servers using a libvirt-managed bridged network
backed by dnsmasq.
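Once the module is active, standard NSS lookups resolve guest names. A minimal sketch, assuming a hypothetical VM Guest named "sles-guest" attached to a libvirt NAT network:

```shell
# Resolve the guest name via NSS (reads the dnsmasq lease files):
getent hosts sles-guest

# Any name-based tool works the same way, for example:
ssh tux@sles-guest
```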
For more information, see http://wiki.libvirt.org/page/NSS_module.
virsh #
You can manage libvirt-provided virtual networks with the
virsh command line tool. To view all network related
virsh commands, run
tux > sudo virsh help network
 Networking (help keyword 'network'):
    net-autostart      autostart a network
    net-create         create a network from an XML file
    net-define         define (but don't start) a network from an XML file
    net-destroy        destroy (stop) a network
    net-dumpxml        network information in XML
    net-edit           edit XML configuration for a network
    net-event          Network Events
    net-info           network information
    net-list           list networks
    net-name           convert a network UUID to network name
    net-start          start a (previously defined) inactive network
    net-undefine       undefine an inactive network
    net-update         update parts of an existing network's configuration
    net-uuid           convert a network name to network UUID
To view brief help information for a specific virsh
command, run virsh help
VIRSH_COMMAND:
tux > sudo virsh help net-create
  NAME
    net-create - create a network from an XML file

  SYNOPSIS
    net-create <file>

  DESCRIPTION
    Create a network.

  OPTIONS
    [--file] <string>  file containing an XML network description
To create a new running virtual network, run
tux > sudo virsh net-create VNET_DEFINITION.xml
The VNET_DEFINITION.xml XML file includes the
definition of the virtual network that libvirt accepts.
To define a new virtual network without activating it, run
tux > sudo virsh net-define VNET_DEFINITION.xml
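As a sketch, the full life cycle of a persistent network might look as follows; the network name and address range are hypothetical:

```shell
# Write a minimal NAT network definition to a file:
cat > vnet_example.xml <<'EOF'
<network>
  <name>vnet_example</name>
  <forward mode="nat"/>
  <ip address="192.168.150.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.150.2" end="192.168.150.254"/>
    </dhcp>
  </ip>
</network>
EOF

# Register the definition, activate the network now, and mark it
# for automatic start on every host boot:
virsh net-define vnet_example.xml
virsh net-start vnet_example
virsh net-autostart vnet_example
```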
The following examples illustrate definitions of different types of virtual networks.
The following configuration allows VM Guests outgoing connectivity if it is available on VM Host Server. In the absence of VM Host Server networking, it allows guests to talk directly to each other.
<network>
 <name>vnet_nated</name>1
 <bridge name="virbr1"/>2
 <forward mode="nat"/>3
 <ip address="192.168.122.1" netmask="255.255.255.0">4
  <dhcp>
   <range start="192.168.122.2" end="192.168.122.254"/>5
   <host mac="52:54:00:c7:92:da" name="host1.testing.com" ip="192.168.122.101"/>6
   <host mac="52:54:00:c7:92:db" name="host2.testing.com" ip="192.168.122.102"/>
   <host mac="52:54:00:c7:92:dc" name="host3.testing.com" ip="192.168.122.103"/>
  </dhcp>
 </ip>
</network>
1: The name of the new virtual network.
2: The name of the bridge device used to construct the virtual network. When defining a new network with a <forward> mode of "nat" or "route" (or an isolated network with no <forward> element), libvirt will automatically generate a unique name for the bridge device if none is specified.
3: Inclusion of the <forward> element indicates that the virtual network will be connected to the physical LAN. The mode attribute specifies the connection type, in this case nat (network address translation).
4: The IP address and netmask for the network bridge.
5: Enable DHCP server for the virtual network, offering IP addresses ranging from the specified start to the specified end address.
6: The optional <host> elements specify hosts that will be given names and predefined IP addresses by the built-in DHCP server. Any IPv4 host element must specify the following: the MAC address of the host to be assigned a given name, the IP to be assigned to that host, and the name to be given to that host by the DHCP server. An IPv6 host element differs slightly from that for IPv4: there is no mac attribute, since a MAC address has no defined meaning in IPv6.
The following configuration routes traffic from the virtual network to the LAN without applying any NAT. The IP address range must be preconfigured in the routing tables of the router on the VM Host Server network.
<network>
<name>vnet_routed</name>
<bridge name="virbr1" />
<forward mode="route" dev="eth1"/>1
<ip address="192.168.122.1" netmask="255.255.255.0">
<dhcp>
<range start="192.168.122.2" end="192.168.122.254" />
</dhcp>
</ip>
</network>
The guest traffic may only go out via the eth1 network device specified in the dev attribute.
This configuration provides a completely isolated private network. The guests can talk to each other, and to VM Host Server, but cannot reach any other machines on the LAN, as the <forward> element is missing in the XML description.
<network>
 <name>vnet_isolated</name>
 <bridge name="virbr3"/>
 <ip address="192.168.152.1" netmask="255.255.255.0">
  <dhcp>
   <range start="192.168.152.2" end="192.168.152.254"/>
  </dhcp>
 </ip>
</network>
This configuration shows how to use an existing VM Host Server's network bridge
br0. VM Guests are directly connected to the physical
network. Their IP addresses will all be on the subnet of the physical
network, and there will be no restrictions on incoming or outgoing
connections.
<network>
<name>host-bridge</name>
<forward mode="bridge"/>
<bridge name="br0"/>
</network>
To list all virtual networks available to libvirt, run:
tux > sudo virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 crowbar              active     yes           yes
 vnet_nated           active     yes           yes
 vnet_routed          active     yes           yes
 vnet_isolated        inactive   yes           yes
To list available domains, run:
tux > sudo virsh list
 Id    Name                           State
----------------------------------------------------
 1     nated_sles12sp1                running
...
To get a list of interfaces of a running domain, run domifaddr
DOMAIN, or optionally specify the
interface to limit the output to this interface. By default, it
additionally outputs their IP and MAC addresses:
tux > sudo virsh domifaddr nated_sles12sp1 --interface vnet0 --source lease
 Name       MAC address          Protocol     Address
-------------------------------------------------------------------------------
 vnet0      52:54:00:9e:0d:2b    ipv6         fd00:dead:beef:55::140/64
 -          -                    ipv4         192.168.100.168/24
To print brief information of all virtual interfaces associated with the specified domain, run:
tux > sudo virsh domiflist nated_sles12sp1
 Interface  Type      Source       Model    MAC
---------------------------------------------------------
 vnet0      network   vnet_nated   virtio   52:54:00:9e:0d:2b
To get detailed information about a network, run:
tux > sudo virsh net-info vnet_routed
Name:           vnet_routed
UUID:           756b48ff-d0c6-4c0a-804c-86c4c832a498
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr5
To start an inactive network that was already defined, find its name (or unique identifier, UUID) with:
tux > sudo virsh net-list --inactive
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 vnet_isolated        inactive   yes           yes
Then run:
tux > sudo virsh net-start vnet_isolated
Network vnet_isolated started
To stop an active network, find its name (or unique identifier, UUID) with:
tux > sudo virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 vnet_isolated        active     yes           yes
Then run:
tux > sudo virsh net-destroy vnet_isolated
Network vnet_isolated destroyed
To remove the definition of an inactive network from VM Host Server permanently, run:
tux > sudo virsh net-undefine vnet_isolated
Network vnet_isolated has been undefined
A network bridge is used to connect two or more network segments. It behaves like a virtual network switch, and guest machines treat it transparently as a physical network interface. Any physical or virtual device can be connected to the bridge.
If there is a network bridge present on VM Host Server, you can connect a VM Guest to it directly. This provides the VM Guest with full incoming and outgoing network access.
This section includes procedures to add or remove network bridges with YaST.
To add a network bridge on VM Host Server, follow these steps:
Start › › .
Activate the tab and click .
Select from the list and enter the bridge device interface name in the entry. Proceed with .
In the tab, specify networking details such as DHCP/static IP address, subnet mask or host name.
Using is only useful when also assigning a device to a bridge that is connected to some DHCP server.
If you intend to create a virtual bridge that has no connection to a
real Ethernet device, use . In this case, it is a good idea to use addresses from
the private IP address ranges, for example,
192.168.x.x or 10.x.x.x.
To create a bridge that should only serve as a connection between the
different guests without connection to the host system, set the IP
address to 0.0.0.0 and the subnet mask to
255.255.255.255. The network scripts handle this
special address as an unset IP address.
Activate the tab and activate the network devices you want to include in the network bridge.
Click to return to the tab and confirm with . The new network bridge should be active on VM Host Server now.
To delete an existing network bridge, follow these steps:
Start › › .
Select the bridge device you want to delete from the list in the tab.
Delete the bridge with and confirm with .
This section includes procedures to add or remove network bridges using the command line.
To add a new network bridge device on VM Host Server, follow these steps:
Log in as root on the VM Host Server where you want to create a new
network bridge.
Choose a name for the new bridge—virbr_test in our example— and run
root # ip link add name virbr_test type bridge
Check if the bridge was created on VM Host Server:
root # bridge vlan
[...]
virbr_test 1 PVID Egress Untagged
virbr_test is present, but is not associated with any
physical network interface.
Bring the network bridge up and add a network interface to the bridge:
root # ip link set virbr_test up
root # ip link set eth1 master virbr_test
You can only enslave a network interface that is not already used by another network bridge.
Optionally, enable STP (see Spanning Tree Protocol):
root # bridge link set dev virbr_test cost 4
To delete an existing network bridge device on VM Host Server from the command line, follow these steps:
Log in as root on the VM Host Server where you want to delete an
existing network bridge.
List existing network bridges to identify the name of the bridge to remove:
root # bridge vlan
[...]
virbr_test 1 PVID Egress Untagged
Delete the bridge:
root # ip link delete dev virbr_test
Sometimes, it is necessary to create a private connection either between two VM Host Servers or between VM Guest systems. For example, to migrate VM Guest to hosts in a different network segment, or to create a private bridge that only VM Guest systems may connect to (even when running on different VM Host Server systems). An easy way to build such connections is to set up VLAN networks.
VLAN interfaces are commonly set up on the VM Host Server. They either interconnect the different VM Host Server systems, or they may be set up as a physical interface to an otherwise virtual-only bridge. It is even possible to create a bridge with a VLAN as a physical interface that has no IP address in the VM Host Server. That way, the guest systems have no possibility to access the host over this network.
Run the YaST module › . Follow this procedure to set up the VLAN device:
Click to create a new network interface.
In the , select .
Change the value of to the ID of
your VLAN. Note that VLAN ID 1 is commonly used for
management purposes.
Click .
Select the interface that the VLAN device should connect to below . If the desired interface does not appear in the list, first set up this interface without an IP Address.
Select the desired method for assigning an IP address to the VLAN device.
Click to finish the configuration.
It is also possible to use the VLAN interface as a physical interface of a bridge. This makes it possible to connect several VM Host Server-only networks and allows live migration of VM Guest systems that are connected to such a network.
YaST does not always allow leaving the IP address unset. However, this may be a
desired feature, especially if VM Host Server-only networks should be connected.
In this case, use the special address 0.0.0.0 with
netmask 255.255.255.255. The system scripts handle this
address as no IP address set.
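On the command line, the same VLAN device can be created with ip; a minimal sketch, where the interface name and VLAN ID are example values and the setting is not persistent across reboots:

```shell
# Create a VLAN interface with ID 100 on top of eth0 and bring it up;
# no IP address is assigned, as described for host-only bridging above:
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up
```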
Virtual Machine Manager's view offers in-depth information about the VM Guest's complete configuration and hardware equipment. Using this view, you can also change the guest configuration or add and modify virtual hardware. To access this view, open the guest's console in Virtual Machine Manager and either choose › from the menu, or click in the toolbar.
virsh
The left panel of the window lists VM Guest overview and already
installed hardware. After clicking an item in the list, you can access its
detailed settings in the details view. You can change the hardware
parameters to match your needs, then click to
confirm them. Some changes take effect immediately, while others need a
reboot of the machine—and virt-manager
warns you about that fact.
To remove installed hardware from a VM Guest, select the appropriate list entry in the left panel and then click in the bottom right of the window.
To add new hardware, click below the left panel, then select the type of the hardware you want to add in the window. Modify its parameters and confirm with .
The following sections describe configuration options for the specific hardware type being added. They do not focus on modifying an existing piece of hardware as the options are identical.
This section describes the setup of the virtualized processor and memory hardware. These components are vital to a VM Guest, therefore you cannot remove them. It also shows how to view the overview and performance information, and how to change boot options.
shows basic details about VM Guest and the hypervisor.
, , and are editable and help you identify VM Guest in the list of machines.
shows the universally unique identifier of the virtual machine, while shows its current status—, , or .
The section shows the hypervisor type, CPU architecture, used emulator, and chipset type. None of the hypervisor parameters can be changed.
shows regularly updated charts of CPU and memory usage, and disk and network I/O.
Not all the charts in the view are enabled by default. To enable these charts, go to › , then select › › , and check the charts that you want to see regularly updated.
includes detailed information about VM Guest processor configuration.
In the section, you can configure several parameters related to the number of allocated CPUs.
The real number of CPUs installed on VM Host Server.
The number of currently allocated CPUs. You can hotplug more CPUs by increasing this value up to the value.
Maximum number of allocatable CPUs for the current session. Any change to this value will take effect after the next VM Guest reboot.
The section lets you configure the CPU model and topology.
When activated, the option uses the host CPU model for VM Guest. Otherwise you need to specify the CPU model from the drop-down box.
After you activate , you can specify a custom number of sockets, cores and threads for the CPU.
contains information about the memory that is available to VM Guest.
Total amount of memory installed on VM Host Server.
The amount of memory currently available to VM Guest. You can hotplug more memory by increasing this value up to the value of .
The maximum value to which you can hotplug the currently available memory. Any change to this value will take effect after the next VM Guest reboot.
introduces options affecting the VM Guest boot process.
In the section, you can specify whether the virtual machine should automatically start during the VM Host Server boot phase.
In the , activate the devices that will be used for booting VM Guest. You can change their order with the up and down arrow buttons on the right side of the list. To choose from a list of bootable devices on VM Guest start, activate .
To boot a different kernel than the one on the boot device, activate and specify the paths to the alternative kernel and initrd placed on the VM Host Server file system. You can also specify kernel arguments that will be passed to the loaded kernel.
This section gives you a detailed description of configuration options for storage devices. It includes both hard disks and removable media, such as USB or CD-ROM drives.
Click below the left panel, then select from the window.
To create a qcow2 disk image in the default location,
activate and specify its size in gigabytes.
To gain more control over the disk image creation, activate and click to manage storage pools and images. The window opens which has almost identical functionality as the tab described in Section 11.1, “Managing Storage with Virtual Machine Manager”.
SUSE only supports the following storage formats:
raw, qcow2, and
qed.
After you create and specify the disk image file, specify the . It can be one of the following options:
: Does not allow using .
: Does not allow using .
: Required to use an existing SCSI storage directly without adding it into a storage pool.
Select the for your device. The list of available options depends on the device type you selected in the previous step. The types based on use paravirtualized drivers.
In the section, select the preferred . For more information on cache modes, see Chapter 14, Disk Cache Modes.
Confirm your settings with . A new storage device appears in the left panel.
This section focuses on adding and configuring new controllers.
Click below the left panel, then select from the window.
Select the type of the controller. You can choose from , , , , (paravirtualized), , or (smart card devices).
Optionally, in the case of a USB or SCSI controller, select a controller model.
Confirm your settings with . A new controller appears in the left panel.
This section describes how to add and configure new network devices.
Click below the left panel, then select from the window.
From the list, select the source for the network connection. The list includes VM Host Server's available physical network interfaces, network bridges, or network bonds. You can also assign the VM Guest to an already defined virtual network. See Chapter 12, Managing Networks for more information on setting up virtual networks with Virtual Machine Manager.
Specify a for the network device. While Virtual Machine Manager pre-fills a random value for your convenience, it is recommended to supply a MAC address appropriate for your network environment to avoid network conflicts.
Select a device model from the list. You can either leave the , or specify one of , , or models. Note that virtio uses paravirtualized drivers.
Confirm your settings with . A new network device appears in the left panel.
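As a sketch of how such an address can be generated, the following produces a random MAC in the 52:54:00 prefix conventionally used for KVM/libvirt guest NICs (the generation method is an assumption, not a feature of Virtual Machine Manager):

```shell
# Generate a random guest MAC address in the locally administered
# 52:54:00 prefix that KVM/libvirt conventionally uses.
mac=$(od -An -N3 -tx1 /dev/urandom | awk '{printf "52:54:00:%s:%s:%s", $1, $2, $3}')
echo "$mac"
```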
When you click within a VM Guest's console with the mouse, the pointer is captured by the console window and cannot be used outside the console unless it is explicitly released (by pressing Alt–Ctrl). To prevent the console from grabbing the key and to enable seamless pointer movement between host and guest instead, add a tablet to the VM Guest.
Adding a tablet has the additional advantage of synchronizing the mouse pointer movement between VM Host Server and VM Guest when using a graphical environment on the guest. With no tablet configured on the guest, you will often see two pointers with one dragging behind the other.
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Click and choose and then in the pop-up window. Proceed with .
If the guest is running, you will be asked whether to enable the tablet after the next reboot. Confirm with .
When you start or restart the VM Guest, the tablet becomes available in the VM Guest.
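For reference, the tablet added this way corresponds to an input device entry like the following in the guest's libvirt XML (viewable with virsh edit):

```xml
<input type='tablet' bus='usb'/>
```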
KVM supports CD or DVD-ROMs in VM Guest either by directly accessing a
physical drive on the VM Host Server or by accessing ISO images. To create an
ISO image from an existing CD or DVD, use dd:
tux > sudo dd if=/dev/CD_DVD_DEVICE of=my_distro.iso bs=2048
To add a CD/DVD-ROM device to your VM Guest, proceed as follows:
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Click and choose in the pop-up window.
Change the to .
Select .
To assign the device to a physical medium, enter the path to the
VM Host Server's CD/DVD-ROM device (for example,
/dev/cdrom) next to
. Alternatively, use
to open a file browser and then
click to select the device. Assigning
the device to a physical medium is only possible when the Virtual Machine Manager was
started on the VM Host Server.
To assign the device to an existing image, click to choose an image from a storage pool. If the Virtual Machine Manager was started on the VM Host Server, alternatively choose an image from another location on the file system by clicking . Select an image and close the file browser with .
Save the new virtualized device with .
Reboot the VM Guest to make the new device available. For more information, see Section 13.8, “Ejecting and Changing Floppy or CD/DVD-ROM Media with Virtual Machine Manager”.
Currently KVM only supports the use of floppy disk images—using a
physical floppy drive is not supported. Create a floppy disk image from
an existing floppy using dd:
tux > sudo dd if=/dev/fd0 of=/var/lib/libvirt/images/floppy.img
To create an empty floppy disk image use one of the following commands:
tux > sudo dd if=/dev/zero of=/var/lib/libvirt/images/floppy.img bs=512 count=2880
tux > sudo mkfs.msdos -C /var/lib/libvirt/images/floppy.img 1440
To add a floppy device to your VM Guest, proceed as follows:
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Click and choose in the pop-up window.
Change the to .
Choose and click to choose an existing image from a storage pool. If Virtual Machine Manager was started on the VM Host Server, alternatively choose an image from another location on the file system by clicking . Select an image and close the file browser with .
Save the new virtualized device with .
Reboot the VM Guest to make the new device available. For more information, see Section 13.8, “Ejecting and Changing Floppy or CD/DVD-ROM Media with Virtual Machine Manager”.
Whether you are using the VM Host Server's physical CD/DVD-ROM
device or an ISO/floppy image: Before you can change the media or image
of an existing device in the VM Guest, you first need to
disconnect the media from the guest.
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Choose the Floppy or CD/DVD-ROM device and “eject” the medium by clicking .
To “insert” a new medium, click .
If using the VM Host Server's physical CD/DVD-ROM device, first change the media in the device (this may require unmounting it on the VM Host Server before it can be ejected). Then choose and select the device from the drop-down box.
If you are using an ISO image, choose and select an image by clicking . When connecting from a remote host, you may only choose images from existing storage pools.
Click to finish. The new media can now be accessed in the VM Guest.
By default, when installing with the virt-install tool, the machine type for a VM Guest is
pc-i440fx. The machine type is stored in the
VM Guest's XML configuration file in
/etc/libvirt/qemu/ in the type tag:
<type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
As an example, the following procedure shows how to change this value to the
machine type q35. q35 is an Intel*
chipset. It includes PCIe, supports up to
12 USB ports, and has support for
SATA and
IOMMU. IRQ routing has also
been improved.
Check whether your VM Guest is inactive:
tux > sudo virsh list --inactive
 Id    Name                           State
----------------------------------------------------
 -     sles11                         shut off
Edit the configuration for this VM Guest:
tux > sudo virsh edit sles11
Change the value of the
machine
attribute:
<type arch='x86_64' machine='pc-q35-2.0'>hvm</type>
Restart the VM Guest.
tux > sudo virsh start sles11
Check that the machine type has changed. Log in to the VM Guest as root and run the following command:
tux > sudo dmidecode | grep Product
Product Name: Standard PC (Q35 + ICH9, 2009)
Whenever the QEMU version on the host system is upgraded (for example,
when upgrading the VM Host Server to a new service pack), upgrade the machine type
of the VM Guests to the latest
available version. To check, use the command qemu-system-x86_64 -M
help on the VM Host Server.
The default machine type pc-i440fx, for example, is
regularly updated. If your VM Guest still runs with a machine type of
pc-i440fx-1.X, an update
to pc-i440fx-2.X is
strongly recommended. This allows taking advantage of the most recent
updates and corrections in machine definitions, and ensures
better future compatibility.
You can directly assign host PCI devices to guests (PCI pass-through). When a PCI device is assigned to one VM Guest, it cannot be used on the host or by another VM Guest unless it is re-assigned. A prerequisite for this feature is a VM Host Server configuration as described in Important: Requirements for VFIO and SR-IOV.
The following procedure describes how to add a PCI device to a VM Guest using Virtual Machine Manager:
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Click and choose the category in the left panel. A list of available PCI devices appears in the right part of the window.
From the list of available PCI devices, choose the one you want to pass to the guest. Confirm with .
Although it is possible to assign a PCI device to a running VM Guest as described above, the device will not become available until you shut down the VM Guest and reboot it afterward.
To assign a PCI device to VM Guest with virsh,
follow these steps:
Identify the host PCI device to assign to the guest. In the following example, we are assigning a DEC network card to the guest:
tux > sudo lspci -nn
[...]
03:07.0 Ethernet controller [0200]: Digital Equipment Corporation DECchip \
21140 [FasterNet] [1011:0009] (rev 22)
[...]
Note down the device ID (03:07.0 in this case).
Gather detailed information about the device using virsh
nodedev-dumpxml ID. To get the
ID, replace the colon and the period in
the device ID (03:07.0) with underscores and prefix
the result with “pci_0000_”
(pci_0000_03_07_0).
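This transformation can be scripted, for example:

```shell
# Sketch: derive the virsh node-device name from an lspci device ID
# by replacing ':' and '.' with '_' and prefixing "pci_0000_".
id="03:07.0"
printf '%s' "$id" | sed -e 's/[:.]/_/g' -e 's/^/pci_0000_/'
# → pci_0000_03_07_0
```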
tux > virsh nodedev-dumpxml pci_0000_03_07_0
<device>
<name>pci_0000_03_07_0</name>
<path>/sys/devices/pci0000:00/0000:00:14.4/0000:03:07.0</path>
<parent>pci_0000_00_14_4</parent>
<driver>
<name>tulip</name>
</driver>
<capability type='pci'>
<domain>0</domain>
<bus>3</bus>
<slot>7</slot>
<function>0</function>
<product id='0x0009'>DECchip 21140 [FasterNet]</product>
<vendor id='0x1011'>Digital Equipment Corporation</vendor>
<numa node='0'/>
</capability>
</device>
Note down the values for domain, bus, slot, and function.
Detach the device from the host system prior to attaching it to VM Guest.
tux > virsh nodedev-detach pci_0000_03_07_0
Device pci_0000_03_07_0 detached
When using a multi-function PCI device that does not support FLR
(function level reset) or PM (power management) reset, you need to
detach all its functions from the VM Host Server. The whole device must be
reset for security reasons. libvirt will
refuse to assign the device if one of its functions is still in use
by the VM Host Server or another VM Guest.
Convert the domain, bus, slot, and function value from decimal to
hexadecimal, and prefix with 0x to tell the
system that the value is hexadecimal. In our example, domain = 0,
bus = 3, slot = 7, and function = 0. Their hexadecimal values are:
tux > printf %x 0
0
tux > printf %x 3
3
tux > printf %x 7
7
This results in domain = 0x0000, bus = 0x03, slot = 0x07 and function = 0x00.
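The conversion and formatting can be done in one step with printf; the zero-padding widths below are an assumption chosen to match the address element's usual format:

```shell
# Sketch: format decimal domain/bus/slot/function values (from
# virsh nodedev-dumpxml) as hexadecimal address attributes.
domain=0 bus=3 slot=7 function=0
printf "domain='0x%04x' bus='0x%02x' slot='0x%02x' function='0x%x'\n" \
  "$domain" "$bus" "$slot" "$function"
# → domain='0x0000' bus='0x03' slot='0x07' function='0x0'
```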
Run virsh edit on your domain, and add the
following device entry in the <devices>
section using the values from the previous step:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x03' slot='0x07' function='0x0'/>
</source>
</hostdev>
Managed compared to unmanaged
libvirt recognizes two modes for handling
PCI devices: they can be either managed or
unmanaged. In the managed case,
libvirt will handle all details of
unbinding the device from the existing driver if needed, resetting
the device, binding it to vfio-pci before
starting the domain, etc. When the domain is terminated or the device
is removed from the domain, libvirt will
unbind from vfio-pci and rebind to the
original driver in the case of a managed device. If the device is
unmanaged, the user must ensure all of these management
aspects of the device are done before assigning it to a domain, and
after the device is no longer used by the domain.
In the example above, the managed='yes' option
means that the device is managed. To switch the device mode to
unmanaged, set managed='no' in the listing above.
If you do so, you need to take care of the related driver with the
virsh nodedev-detach and virsh
nodedev-reattach commands. That means you need to run
virsh nodedev-detach pci_0000_03_07_0 prior to
starting the VM Guest to detach the device from the host. In case
the VM Guest is not running, you can make the device available for
the host by running virsh nodedev-reattach
pci_0000_03_07_0.
Shut down the VM Guest and restart it to make the assigned PCI device available.
If you are running SELinux on your VM Host Server, you need to disable it prior to starting the VM Guest with
tux > setsebool -P virt_use_sysfs 1
Analogous to assigning host PCI devices (see Section 13.10, “Assigning a Host PCI Device to a VM Guest”), you can directly assign host USB devices to guests. When the USB device is assigned to one VM Guest, it cannot be used on the host or by another VM Guest unless it is re-assigned.
To assign a host USB device to VM Guest using Virtual Machine Manager, follow these steps:
Double-click a VM Guest entry in the Virtual Machine Manager to open its console and switch to the view with › .
Click and choose the category in the left panel. A list of available USB devices appears in the right part of the window.
From the list of available USB devices, choose the one you want to pass to the guest. Confirm with . The new USB device appears in the left pane of the view.
To remove the host USB device assignment, click it in the left pane of the view and confirm with .
To assign a USB device to VM Guest using virsh,
follow these steps:
Identify the host USB device to assign to the guest:
tux > sudo lsusb
[...]
Bus 001 Device 003: ID 0557:2221 ATEN International Co., Ltd Winbond Hermon
[...]
Note down the vendor and product IDs. In our example, the vendor ID is
0557 and the product ID is 2221.
Run virsh edit on your domain, and add the
following device entry in the <devices>
section using the values from the previous step:
<hostdev mode='subsystem' type='usb'>
  <source startupPolicy='optional'>
    <vendor id='0557'/>
    <product id='2221'/>
  </source>
</hostdev>
Instead of defining the host device with <vendor/> and <product/> IDs, you
can use the <address/> element as described for host PCI devices in Section 13.10.2, “Adding a PCI Device with virsh”.
Shut down the VM Guest and restart it to make the assigned USB device available.
If you are running SELinux on your VM Host Server, you need to disable it prior to starting the VM Guest with
tux > setsebool -P virt_use_sysfs 1
Single Root I/O Virtualization (SR-IOV) capable PCIe devices can replicate their resources, so they appear to be multiple devices. Each of these "pseudo-devices" can be assigned to a VM Guest.
SR-IOV is an industry specification created by the Peripheral Component Interconnect Special Interest Group (PCI-SIG) consortium. It introduces physical functions (PF) and virtual functions (VF). PFs are full PCIe functions that are used to manage and configure the device; they can also move data. VFs lack the configuration and management part; they can only move data and offer a reduced set of configuration functions. Because VFs do not have all PCIe functions, the host operating system or the hypervisor must support SR-IOV to be able to access and initialize VFs. The theoretical maximum is 256 VFs per device (consequently, the maximum for a dual-port Ethernet card would be 512). In practice this maximum is much lower, since each VF consumes resources.
The following requirements must be met to be able to use SR-IOV:
An SR-IOV-capable network card (as of openSUSE Leap 42.3, only network cards support SR-IOV)
An AMD64/Intel 64 host supporting hardware virtualization (AMD-V or Intel VT-x)
A chipset that supports device assignment (AMD-Vi or Intel VT-d)
libvirt 0.9.10 or newer
SR-IOV drivers must be loaded and configured on the host system
A host configuration that meets the requirements listed at Important: Requirements for VFIO and SR-IOV
A list of the PCI addresses of the VF(s) that will be assigned to VM Guests
The information whether a device is SR-IOV-capable can be obtained from
its PCI descriptor by running lspci. A device that
supports SR-IOV reports a capability similar to
the following:
Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
Before adding an SR-IOV device to a VM Guest, the VM Host Server needs to be configured as described in Section 13.12.2, “Loading and Configuring the SR-IOV Host Drivers”.
To be able to access and initialize VFs, an SR-IOV-capable driver needs to be loaded on the host system.
Before loading the driver, make sure the card is properly detected by
running lspci. The following example shows the
lspci output for the dual-port Intel 82576NS
network card:
tux > sudo /sbin/lspci | grep 82576
01:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
01:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
04:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01)
In case the card is not detected, it is likely that the hardware virtualization support in the BIOS/EFI has not been enabled.
Check whether the SR-IOV driver is already
loaded by running lsmod. In the following example a
check for the igb driver (for the Intel 82576NS network card) returns
a result. That means the driver is already loaded. If the command
returns nothing, the driver is not loaded.
tux > sudo /sbin/lsmod | egrep "^igb "
igb                   185649  0
Skip this step if the driver is already loaded.
If the SR-IOV driver is not yet loaded, the
non-SR-IOV driver needs to be removed first,
before loading the new driver. Use rmmod to unload
a driver. The following example unloads the
non-SR-IOV driver for the Intel 82576NS network
card:
tux > sudo /sbin/rmmod igbvf
Then load the SR-IOV driver using
the modprobe command; the VF parameter
(max_vfs) is mandatory:
tux > sudo /sbin/modprobe igb max_vfs=8
Alternatively, load the driver via sysfs:
Find the PCI ID of the physical NIC by listing Ethernet devices:
tux > sudo lspci | grep Eth
06:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
To enable VFs, echo the number of desired VFs to load to the
sriov_numvfs parameter:
tux > sudo echo 1 > /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs
Verify that the VF NIC was loaded:
tux > sudo lspci | grep Eth
06:00.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:00.1 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
06:08.0 Ethernet controller: Emulex Corporation OneConnect NIC (Skyhawk) (rev 10)
Obtain the maximum number of VFs available:
tux > sudo lspci -vvv -s 06:00.1 | grep 'Initial VFs'
Initial VFs: 32, Total VFs: 32, Number of VFs: 0, Function Dependency Link: 01
Create a before.service file that loads the VF via
sysfs on boot:
[Unit]
Before=
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/bash -c "echo 1 > /sys/bus/pci/devices/0000:06:00.1/sriov_numvfs"
# beware, executable is run directly, not through a shell, check the man pages
# systemd.service and systemd.unit for full syntax
[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target
Copy it to /etc/systemd/system.
Additionally, create another service file
(after-local.service) pointing to the
/etc/init.d/after.local script that detaches the
NIC before the VM starts; otherwise, the VM would fail to start:
[Unit]
Description=/etc/init.d/after.local Compatibility
After=libvirtd.service
Requires=libvirtd.service
[Service]
Type=oneshot
ExecStart=/etc/init.d/after.local
RemainAfterExit=true
[Install]
WantedBy=multi-user.target
Copy this file to /etc/systemd/system as well.
#! /bin/sh
#
# Copyright (c) 2010 SuSE LINUX Products GmbH, Germany. All rights reserved.
# ...
virsh nodedev-detach pci_0000_06_08_0
Then save it as /etc/init.d/after.local.
Reboot the machine and check if the SR-IOV driver is loaded by
re-running the lspci command from the first step of
this procedure. If the SR-IOV driver was loaded successfully you
should see additional lines for the VFs:
01:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01) 01:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 01:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 01:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) [...] 04:00.0 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01) 04:00.1 Ethernet controller: Intel Corporation 82576NS Gigabit Network Connection (rev 01) 04:10.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 04:10.1 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) 04:10.2 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) [...]
When the SR-IOV hardware is properly set up on the VM Host Server, you can add VFs to VM Guests. To do so, you need to collect some data first.
Note: The following procedure uses example data. Make sure to replace it with appropriate data from your setup.
Use the virsh nodedev-list command to get the PCI
address of the VF you want to assign and its corresponding PF.
Numerical values from the lspci output shown in
Section 13.12.2, “Loading and Configuring the SR-IOV Host Drivers” (for example
01:00.0 or 04:00.1) are
transformed by adding the prefix "pci_0000_" and by replacing colons
and dots with underscores. So a PCI ID listed as "04:00.0" by
lspci is listed as "pci_0000_04_00_0" by virsh. The
following example lists the PCI IDs for the second port of the Intel
82576NS network card:
tux > sudo virsh nodedev-list | grep 0000_04_
pci_0000_04_00_0
pci_0000_04_00_1
pci_0000_04_10_0
pci_0000_04_10_1
pci_0000_04_10_2
pci_0000_04_10_3
pci_0000_04_10_4
pci_0000_04_10_5
pci_0000_04_10_6
pci_0000_04_10_7
pci_0000_04_11_0
pci_0000_04_11_1
pci_0000_04_11_2
pci_0000_04_11_3
pci_0000_04_11_4
pci_0000_04_11_5
The first two entries represent the PFs, whereas the other entries represent the VFs.
Get more data that will be needed by running the command
virsh nodedev-dumpxml on the PCI ID of the VF you
want to add:
tux > sudo virsh nodedev-dumpxml pci_0000_04_10_0
<device>
  <name>pci_0000_04_10_0</name>
  <parent>pci_0000_00_02_0</parent>
  <capability type='pci'>
    <domain>0</domain>
    <bus>4</bus>
    <slot>16</slot>
    <function>0</function>
    <product id='0x10ca'>82576 Virtual Function</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <capability type='phys_function'>
      <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </capability>
  </capability>
</device>
The following data is needed for the next step:
<domain>0</domain>
<bus>4</bus>
<slot>16</slot>
<function>0</function>
Create a temporary XML file (for example
/tmp/vf-interface.xml) containing the data
necessary to add a VF network device to an existing VM Guest. The
minimal content of the file needs to look like the following:
<interface type='hostdev'>
  <source>
    <address type='pci' domain='0' bus='11' slot='16' function='0'/>
  </source>
</interface>
VFs do not get a fixed MAC address; it changes every time the host reboots. When adding network devices the “traditional” way with <hostdev>, you would need to reconfigure the VM Guest's network device after each reboot of the host, because of the MAC address change. To avoid this kind of problem, libvirt introduced the “interface type='hostdev'” directive, which sets up network-specific data before assigning the device.
Specify the data you acquired in the previous step here.
If a device is already attached to the host, it cannot be attached to a guest. To make it available for guests, detach it from the host first:
tux > virsh nodedev-detach pci_0000_04_10_0
Last, add the VF interface to an existing VM Guest:
tux > virsh attach-device GUEST /tmp/vf-interface.xml --OPTION
GUEST needs to be replaced by the domain name, ID, or UUID of the VM Guest, and --OPTION can be one of the following:
--persistent
This option will always add the device to the domain's persistent XML. In addition, if the domain is running, it will be hotplugged.
--config
This option will only affect the persistent XML, even if the domain is running. The device will only show up in the guest on next boot.
--live
This option will only affect a running domain. If the domain is inactive, the operation will fail. The device is not persisted in the XML and will not be available in the guest on next boot.
--current
This option affects the current state of the domain. If the domain is inactive, the device is added to the persistent XML and will be available on next boot. If the domain is active, the device is hotplugged but not added to the persistent XML.
To detach a VF interface, use the virsh
detach-device command, which also takes the options listed
above.
If you statically define the PCI address of a VF in a guest's configuration as described in Section 13.12.3, “Adding a VF Network Device to an Existing VM Guest”, it is hard to migrate such a guest to another host. The host must have identical hardware in the same location on the PCI bus, or the guest configuration must be modified prior to each start.
Another approach is to create a libvirt network with a device pool
that contains all the VFs of an SR-IOV device.
The guest then references this network, and each time it is started, a
single VF is dynamically allocated to it. When the guest is stopped, the
VF is returned to the pool, available for another guest.
The following example of network definition creates a pool of all VFs for the SR-IOV device with its physical function (PF) at the network interface eth0 on the host:
<network>
<name>passthrough</name>
<forward mode='hostdev' managed='yes'>
<pf dev='eth0'/>
</forward>
</network>
To use this network on the host, save the above code to a file, for
example /tmp/passthrough.xml, and execute the
following commands. Remember to replace eth0 with the real network
interface name of your SR-IOV device's PF:
tux > virsh net-define /tmp/passthrough.xml
tux > virsh net-autostart passthrough
tux > virsh net-start passthrough
The following example of guest device interface definition uses a VF of
the SR-IOV device from the pool created in
Section 13.12.4.1, “Defining Network with Pool of VFs on VM Host Server”. libvirt automatically
derives the list of all VFs associated with that PF the first time the
guest is started.
<interface type='network'>
  <source network='passthrough'/>
</interface>
To verify the list of associated VFs, run virsh net-dumpxml
passthrough on the host after the first guest that uses the
network with the pool of VFs starts.
<network connections='1'>
<name>passthrough</name>
<uuid>a6a26429-d483-d4ed-3465-4436ac786437</uuid>
<forward mode='hostdev' managed='yes'>
<pf dev='eth0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x3'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x5'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x7'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x1'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x3'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x5'/>
</forward>
</network>
Macvtap provides direct attachment of a VM Guest virtual interface to a host network interface. The macvtap-based interface extends the VM Host Server network interface and has its own MAC address on the same Ethernet segment. Typically, this is used to make both the VM Guest and the VM Host Server show up directly on the switch that the VM Host Server is connected to.
Macvtap cannot be used with network interfaces that are already connected to a Linux bridge. Before attempting to create the macvtap interface, remove the interface from the bridge.
When using macvtap, a VM Guest can communicate with other VM Guests, and with other external hosts on the network. But it cannot communicate with the VM Host Server on which the VM Guest runs. This is the defined behavior of macvtap, because of the way the VM Host Server's physical Ethernet is attached to the macvtap bridge. Traffic from the VM Guest into that bridge that is forwarded to the physical interface cannot be bounced back up to the VM Host Server's IP stack. Similarly, traffic from the VM Host Server's IP stack that is sent to the physical interface cannot be bounced back up to the macvtap bridge for forwarding to the VM Guest.
Virtual network interfaces based on macvtap are supported by libvirt
by specifying an interface type of direct. For example:
<interface type='direct'>
  <mac address='aa:bb:cc:dd:ee:ff'/>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
The operation mode of the macvtap device can be controlled with
the mode attribute. The following list shows the possible
values and a description for each:
vepa: All VM Guest packets are sent to an external bridge. Packets
whose destination is a VM Guest on the same VM Host Server as where the
packet originates from are sent back to the VM Host Server by the VEPA
capable bridge (today's bridges are typically not VEPA capable).
bridge: Packets whose destination is on the same VM Host Server where
they originate from are directly delivered to the target macvtap
device. Both origin and destination devices need to be in bridge
mode for direct delivery. If either one of them is in vepa mode, a
VEPA capable bridge is required.
private: All packets are sent to the external bridge and will only
be delivered to a target VM Guest on the same VM Host Server if they are
sent through an external router or gateway and that device sends
them back to the VM Host Server. This procedure is followed if either the
source or destination device is in private mode.
passthrough: A special mode that gives more power to the network
interface. All packets will be forwarded to the interface, allowing
virtio VM Guests to change the MAC address or set promiscuous mode
to bridge the interface or create VLAN interfaces on top
of it. Note that a network interface is not shareable in passthrough
mode. Assigning an interface to a VM Guest will disconnect it from
the VM Host Server. For this reason SR-IOV virtual functions are often
assigned to the VM Guest in passthrough mode.
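For illustration, an interface definition using one of the other modes differs only in the mode attribute; the sketch below assumes eth0 as the host interface:

```xml
<interface type='direct'>
  <mac address='aa:bb:cc:dd:ee:ff'/>
  <source dev='eth0' mode='vepa'/>
  <model type='virtio'/>
</interface>
```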
Keeping the correct time in a VM Guest is one of the more difficult aspects of virtualization. Keeping the correct time is especially important for network applications and is also a prerequisite to do a live migration of a VM Guest.
Virtual Machines consist of disk images and definition files. Manually accessing and manipulating these guest components (outside of normal hypervisor processes) is possible, but inherently dangerous and risks compromising data integrity. libguestfs is a C library and a corresponding set of tools designed for safely accessing and modifying Virtual Machine disk images—outside of normal hypervisor processes, but without the risk normally associated with manual editing.
Hypervisors allow for various storage caching strategies to be specified when configuring a VM Guest. Each guest disk interface can have one of the following cache modes specified: writethrough, writeback, none, directsync, or unsafe. If no cache mode is specified, an appropriate default cache mode is used. These cache modes influence how host-based storage is accessed, as follows:
Read/write data may be cached in the host page cache.
The guest's storage controller is informed whether a write cache is present, allowing for the use of a flush command.
Synchronous write mode may be used, in which write requests are reported complete only when committed to the storage device.
Flush commands (generated by the guest storage controller) may be ignored for performance reasons.
If a disorderly disconnection between the guest and its storage occurs, the cache mode in use will affect whether data loss occurs. The cache mode can also affect disk performance significantly. Additionally, some cache modes are incompatible with live migration, depending on several factors. There are no simple rules about what combination of cache mode, disk image format, image placement, or storage sub-system is best. The user should plan each guest's configuration carefully and experiment with various configurations to determine the optimal performance.
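In a libvirt-managed guest, the cache mode is set with the cache attribute of the disk's driver element; the following is a sketch, with the image path, target device, and chosen mode as example values:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/disk0.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```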
In older QEMU versions, not specifying a cache mode meant that
writethrough would be used as the default. With
modern versions—as shipped with openSUSE Leap—the various
guest storage interfaces have been fixed to handle
writeback or writethrough
semantics more correctly. This allows for the default caching mode to be
switched to writeback. The guest driver for each of
ide, scsi, and
virtio can disable the writeback
cache, causing the caching mode to revert to
writethrough. Typical guest storage drivers
maintain the default caching mode of writeback,
however.
This mode causes the hypervisor to interact with the disk image file or
block device with O_DSYNC semantics. Writes are reported
as completed only when the data has been committed
to the storage device. The host page cache is used in what can be
termed a writethrough caching mode. The guest's virtual storage
adapter is informed that there is no writeback cache, so the guest
would not need to send down flush commands to manage data integrity.
The storage behaves as if there is a writethrough cache.
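The effect of O_DSYNC semantics can be observed from the host side with dd, which exposes the same open flag through oflag=dsync. A sketch (file names are arbitrary):

```shell
# Write 16 KiB with O_DSYNC semantics: each write is reported complete
# only after the data has reached the storage device, as in the
# writethrough cache mode.
dd if=/dev/zero of=dsync-test.img bs=4k count=4 oflag=dsync 2>/dev/null

# The same write without oflag=dsync may be reported complete as soon
# as the data is in the host page cache.
dd if=/dev/zero of=cached-test.img bs=4k count=4 2>/dev/null

ls -l dsync-test.img cached-test.img
```

The dsync variant is noticeably slower on rotational storage, which is exactly the performance cost writethrough trades for safety.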
This mode causes the hypervisor to interact with the disk image file or
block device with neither O_DSYNC nor
O_DIRECT semantics. The host page cache is used and
writes are reported to the guest as completed when they are placed in the
host page cache. The
normal page cache management will handle commitment to the storage
device. Additionally, the guest's virtual storage adapter is informed
of the writeback cache, so the guest would be expected to send down
flush commands as needed to manage data integrity. This is analogous to a RAID controller with a RAM cache.
This mode causes the hypervisor to interact with
the disk image file or block device with
O_DIRECT semantics. The host page cache is bypassed
and I/O happens directly between the hypervisor user space buffers and the
storage
device. Because the actual storage device may report a write as
completed when placed in its write queue only, the guest's virtual
storage adapter is informed that there is a writeback cache. The
guest would be expected to send down flush commands as needed to
manage data integrity. Performance-wise, it is equivalent to direct
access to your host's disk.
This mode is similar to the writeback mode
discussed above. The key aspect of this “unsafe” mode is that all flush commands from the guest are ignored. Using this mode implies accepting better performance at the risk of data loss in case of a host failure. It is useful, for example, during guest installation, but not for production workloads.
This mode causes the hypervisor to interact with the disk image file or
block device with both O_DSYNC and
O_DIRECT semantics. This means writes are reported as completed only when the data has been committed to the storage device, and the host page cache is bypassed. Like writethrough, it is helpful for guests that do not send flushes when needed. It was the last cache mode added,
completing the possible combinations of caching and direct access
semantics.
These are the safest modes, and considered equally safe, given that
the guest operating system is “modern and well behaved”,
which means that it uses flushes as needed. If you have a suspect guest, use writethrough or
directsync. Note that some file systems are not
compatible with none or
directsync, as they do not support O_DIRECT,
which these cache modes rely on.
This mode informs the guest of the presence of a write cache, and relies on the guest to send flush commands as needed to maintain data integrity within its disk image. This is a common storage design which is completely accounted for within modern file systems. This mode exposes the guest to data loss in the unlikely case of a host failure, because there is a window of time between the time a write is reported as completed, and that write being committed to the storage device.
This mode is similar to writeback caching except for the following: the guest flush commands are ignored, nullifying the data integrity control of these flush commands, and resulting in a higher risk of data loss because of host failure. The name “unsafe” should serve as a warning that there is a much higher potential for data loss because of a host failure than with the other modes. When the guest terminates, the cached data is flushed.
The choice to make full use of the page cache, or to write through it, or
to bypass it altogether can have dramatic performance implications. Other
factors that influence disk performance include the capabilities of the
actual storage system, what disk image format is used, the potential size
of the page cache and the IO scheduler used. Additionally, not flushing
the write cache increases performance, but with risk, as noted above. As
a general rule, high-end systems typically perform best with the cache mode
none, because of the reduced data copying that
occurs. The potential benefit of having multiple guests share the common
host page cache, the ratio of reads to writes, and the use of AIO mode
native (see below) should also be considered.
The caching of storage data and metadata restricts the configurations
that support live migration. Currently, only raw,
qcow2 and qed image formats can be
used for live migration. If a clustered file system is used, all cache
modes support live migration. Otherwise the only cache mode that supports
live migration on read/write shared storage is none.
The libvirt management layer includes checks for
migration compatibility based on several factors. If the guest
storage is hosted on a clustered file system, is read-only or is marked
shareable, then the cache mode is ignored when determining if migration
can be allowed. Otherwise libvirt will not allow
migration unless the cache mode is set to none.
However, this restriction can be overridden with the
“unsafe” option to the migration APIs, which is also
supported by virsh, as for example in
tux > virsh migrate --live --unsafe
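Another way to satisfy the libvirt migration check is to mark the disk read-only or shareable in the domain XML, in which case the cache mode is ignored for migration purposes. A sketch (paths are placeholders; only mark a disk shareable when the storage is actually prepared for concurrent access):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/cluster/images/shared.raw'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>
```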
The cache mode none is required for the AIO mode setting
native. If another cache mode is used, the
AIO mode silently falls back to the default, threads. The
guest flush within the host is implemented using
fdatasync().
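The same syscall is reachable from the shell: coreutils sync with the --data option issues fdatasync() for the named files, which is what a guest-initiated flush translates to on the host. A minimal sketch:

```shell
# Write some data, then flush only the file's data blocks to the
# storage device with fdatasync(), as a guest flush does on the host.
echo "guest data" > flush-test.txt
sync --data flush-test.txt
echo "flushed"
```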
Keeping the correct time in a VM Guest is one of the more difficult aspects of virtualization. Keeping the correct time is especially important for network applications and is also a prerequisite to do a live migration of a VM Guest.
It is strongly recommended to ensure the VM Host Server keeps the correct time as well, for example, by using NTP (see Chapter 18, Time Synchronization with NTP for more information).
kvm_clock #
KVM provides a paravirtualized clock which is supported via
the kvm_clock driver. It is strongly recommended
to use kvm_clock.
Use the following command inside a VM Guest running Linux to check
whether the driver kvm_clock has been loaded:
tux > sudo dmesg | grep kvm-clock
[    0.000000] kvm-clock: cpu 0, msr 0:7d3a81, boot clock
[    0.000000] kvm-clock: cpu 0, msr 0:1206a81, primary cpu clock
[    0.012000] kvm-clock: cpu 1, msr 0:1306a81, secondary cpu clock
[    0.160082] Switching to clocksource kvm-clock
To check which clock source is currently used, run the following command
in the VM Guest. It should output kvm-clock:
tux > cat /sys/devices/system/clocksource/clocksource0/current_clocksource
kvm-clock
kvm-clock and NTP
When using kvm-clock, it is recommended to use
NTP in the VM Guest, as well. Using NTP on the VM Host Server
is also recommended.
The paravirtualized kvm-clock is currently not available for
Windows* operating systems. For Windows*, use the Windows Time
Service Tools for time synchronization (see
http://technet.microsoft.com/en-us/library/cc773263%28WS.10%29.aspx
for more information).
When booting, virtual machines get their initial clock time from their host. After that, fully virtual machines manage their time independently from the host. Paravirtual machines manage clock time according to their independent wallclock setting. If the independent wallclock is enabled, the virtual machine manages its time independently and does not synchronize with the host. If it is disabled, the virtual machine periodically synchronizes its time with the host clock.
If a guest operating system is configured for NTP and the virtual
machine's independent wallclock setting is disabled, it will still
periodically synchronize its time with the host time. This dual type of
configuration can result in time drift between virtual machines that need
to be synchronized. To effectively use an external time source, such as
NTP, for time synchronization on a virtual machine, the virtual machine's
independent wallclock setting must be enabled (set to
1). Otherwise, it will continue to synchronize its
time with its host.
Log in to the virtual machine’s operating system as
root.
In the virtual machine environment, enter
tux > cat /proc/sys/xen/independent_wallclock
0 means that the virtual machine is getting its
time from the host and is not using independent wallclock.
1 means that the virtual machine is using
independent wallclock and managing its time independently from the
host.
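The interpretation of that value can be sketched as follows. The real file on a Xen guest is /proc/sys/xen/independent_wallclock; a local file stands in for it here so the logic can be shown outside a guest:

```shell
# Simulated stand-in for /proc/sys/xen/independent_wallclock.
wallclock=./independent_wallclock
echo 0 > "$wallclock"

if [ "$(cat "$wallclock")" = "1" ]; then
  echo "independent wallclock: guest manages its own time"
else
  echo "wallclock disabled: guest synchronizes time with the host"
fi
```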
Log in to the virtual machine environment as
root.
Edit the virtual machine’s /etc/sysctl.conf
file.
Add or change the following entry:
xen.independent_wallclock=1
Enter 1 to enable or 0 to disable
the wallclock setting.
Save the file and reboot the virtual machine operating system.
While booting, a virtual machine gets its initial clock time from the
host. Then, if the wallclock setting is set to 1 in the
sysctl.conf file, it manages its clock time
independently and does not synchronize with the host clock time.
Log in to the virtual machine environment as
root.
Enter the following command:
root # echo "1" > /proc/sys/xen/independent_wallclock
Enter 1 to enable or 0 to disable
the wallclock setting.
Although the current status of the independent wallclock changes
immediately, its clock time might not be immediately synchronized. The
setting persists until the virtual machine reboots. Then, it gets its
initial clock time from the host and uses the independent wallclock
according to the setting specified in the
sysctl.conf file.
Virtual Machines consist of disk images and definition files. Manually accessing and manipulating these guest components (outside of normal hypervisor processes) is possible, but inherently dangerous and risks compromising data integrity. libguestfs is a C library and a corresponding set of tools designed for safely accessing and modifying Virtual Machine disk images—outside of normal hypervisor processes, but without the risk normally associated with manual editing.
As disk images and definition files are simply another type of file in a Linux environment, it is possible to use many tools to access, edit and write to these files. When used correctly, such tools can be an important part of guest administration. However, even correct usage of these tools is not without risk. Risks that should be considered when manually manipulating guest disk images include:
Data Corruption: Concurrently accessing images, by the host machine or another node in a cluster, can cause changes to be lost or data corruption to occur if virtualization protection layers are bypassed.
Security: Mounting disk images as loop devices requires root access. While an image is loop mounted, other users and processes can potentially access the disk contents.
Administrator Error: Bypassing virtualization layers correctly requires advanced understanding of virtual components and tools. Failing to isolate the images or failing to clean up properly after changes have been made can lead to further problems once back in virtualization control.
libguestfs C library has been designed to safely and securely create, access and modify virtual machine (VM Guest) disk images. It also provides additional language bindings: for Perl, Python, PHP (only for 64-bit machines), and Ruby. libguestfs can access VM Guest disk images without needing root and with multiple layers of defense against rogue disk images.
libguestfs provides many tools designed for accessing and modifying VM Guest disk images and contents. These tools provide such capabilities as: viewing and editing files inside guests, scripting changes to VM Guests, monitoring disk used/free statistics, creating guests, doing V2V or P2V migrations, performing backups, cloning VM Guests, formatting disks, and resizing disks.
You must not use libguestfs tools on live virtual machines. Doing so will probably result in disk corruption in the VM Guest. libguestfs tools try to stop you from doing this, but cannot catch all cases.
However, most commands have the --ro (read-only) option.
With this option, you can run a command against a live virtual machine.
The results might be strange or inconsistent at times, but you will not
risk disk corruption.
libguestfs is shipped in four packages:
libguestfs0: which provides
the main C library
guestfs-data: which contains
the appliance files used when launching images (stored in
/usr/lib64/guestfs)
guestfs-tools: the core guestfs
tools, man pages, and the /etc/libguestfs-tools.conf
configuration file.
guestfs-winsupport: provides
support for Windows file guests in the guestfs tools. This package only
needs to be installed to handle Windows guests, for example when
converting a Windows guest to KVM.
To install guestfs tools on your system run:
tux > sudo zypper in guestfs-tools
The set of tools found within the guestfs-tools package is used for
accessing and modifying virtual machine disk images. This functionality
is provided through a familiar shell interface with built-in safeguards
which ensure image integrity. Guestfs tools shells expose all
capabilities of the guestfs API, and create an appliance on the fly
using the packages installed on the machine and the files found in
/usr/lib64/guestfs.
Guestfs tools support various file systems including:
Ext2, Ext3, Ext4
Xfs
Btrfs
Multiple disk image formats are also supported:
raw
qcow2
Guestfs may also support Windows* file systems (VFAT, NTFS), BSD* and Apple* file systems, and other disk image formats (VMDK, VHDX...). However, these file systems and disk image formats are unsupported on SUSE Linux Enterprise.
virt-rescue #
virt-rescue is similar to a rescue CD, but for
virtual machines, and without the need for a CD. virt-rescue presents
users with a rescue shell and some simple recovery tools which can be
used to examine and correct problems within a virtual machine or disk
image.
tux > virt-rescue -a sles.qcow2
Welcome to virt-rescue, the libguestfs rescue shell.
Note: The contents of / are the rescue appliance.
You need to mount the guest's partitions under /sysroot
before you can examine them. A helper script for that exists:
mount-rootfs-and-do-chroot.sh /dev/sda2
><rescue>
[ 67.194384] EXT4-fs (sda1): mounting ext3 file system
using the ext4 subsystem
[ 67.199292] EXT4-fs (sda1): mounted filesystem with ordered data
mode. Opts: (null)
mount: /dev/sda1 mounted on /sysroot.
mount: /dev bound on /sysroot/dev.
mount: /dev/pts bound on /sysroot/dev/pts.
mount: /proc bound on /sysroot/proc.
mount: /sys bound on /sysroot/sys.
Directory: /root
Thu Jun 5 13:20:51 UTC 2014
(none):~ #
You are now running the VM Guest in rescue mode:
(none):~ # cat /etc/fstab
devpts  /dev/pts          devpts  mode=0620,gid=5  0 0
proc    /proc             proc    defaults         0 0
sysfs   /sys              sysfs   noauto           0 0
debugfs /sys/kernel/debug debugfs noauto           0 0
usbfs   /proc/bus/usb     usbfs   noauto           0 0
tmpfs   /run              tmpfs   noauto           0 0
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1 / ext3 defaults 1 1
virt-resize #
virt-resize is used to resize a virtual machine disk,
making it larger or smaller overall, and resizing or deleting any
partitions contained within.
Full step-by-step example: How to expand a virtual machine disk
First, with virtual machine powered off, determine the size of the partitions available on this virtual machine:
tux > virt-filesystems --long --parts --blkdevs -h -a sles.qcow2
Name Type MBR Size Parent
/dev/sda1 partition 83 16G /dev/sda
/dev/sda device - 16G -
virt-resize cannot do in-place disk
modifications—there must be sufficient space to store the
resized output disk. Use the truncate command to
create a file of suitable size:
tux > truncate -s 32G outdisk.img
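Note that truncate creates a sparse file, so the output file consumes almost no real disk space until virt-resize writes into it. A scaled-down sketch (32M instead of 32G) showing apparent versus allocated size:

```shell
# Create a sparse 32 MiB file; the apparent size is set immediately,
# but blocks are only allocated as data is written.
truncate -s 32M outdisk-demo.img
stat -c 'apparent bytes: %s'    outdisk-demo.img
stat -c 'allocated blocks: %b'  outdisk-demo.img
```

On most file systems the allocated block count stays near zero until the copy begins.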
Use virt-resize to resize the disk image.
virt-resize requires two mandatory parameters for
the input and output images:
tux > virt-resize --expand /dev/sda1 sles.qcow2 outdisk.img
Examining sles.qcow2 ...
**********
Summary of changes:
/dev/sda1: This partition will be resized from 16,0G to 32,0G. The
filesystem ext3 on /dev/sda1 will be expanded using the 'resize2fs'
method.
**********
Setting up initial partition table on outdisk.img ...
Copying /dev/sda1 ...
 100% ⟦▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒⟧ 00:03
Expanding /dev/sda1 using the 'resize2fs' method ...
Resize operation completed with no errors. Before deleting the old
disk, carefully check that the resized disk boots and works correctly.
Confirm the image was resized properly:
tux > virt-filesystems --long --parts --blkdevs -h -a outdisk.img
Name Type MBR Size Parent
/dev/sda1 partition 83 32G /dev/sda
/dev/sda device - 32G -
Bring up the VM Guest using the new disk image and confirm correct operation before deleting the old image.
There are guestfs tools to simplify administrative tasks—such as viewing and editing files, or obtaining information on the virtual machine.
virt-filesystems #
This tool is used to report information regarding file systems, partitions, and logical volumes in a disk image or virtual machine.
tux > virt-filesystems -l -a sles.qcow2
Name Type VFS Label Size Parent
/dev/sda1 filesystem ext3 - 17178820608 -
virt-ls #
virt-ls lists file names, file sizes, checksums,
extended attributes and more from a virtual machine or disk image.
Multiple directory names can be given, in which case the output from
each is concatenated. To list directories from a libvirt guest, use the
-d option to specify the name of the guest. For a disk
image, use the -a option.
tux > virt-ls -h -lR -a sles.qcow2 /var/log/
d 0755 776 /var/log
- 0640 0 /var/log/NetworkManager
- 0644 23K /var/log/Xorg.0.log
- 0644 23K /var/log/Xorg.0.log.old
d 0700 482 /var/log/YaST2
- 0644 512 /var/log/YaST2/_dev_vda
- 0644 59 /var/log/YaST2/arch.info
- 0644 473 /var/log/YaST2/config_diff_2017_05_03.log
- 0644 5.1K /var/log/YaST2/curl_log
- 0644 1.5K /var/log/YaST2/disk_vda.info
- 0644 1.4K /var/log/YaST2/disk_vda.info-1
[...]
virt-cat #
virt-cat is a command line tool to display the
contents of a file that exists in the named virtual machine (or disk
image). Multiple file names can be given, in which case they are
concatenated together. Each file name must be a full path, starting at
the root directory (starting with '/').
tux > virt-cat -a sles.qcow2 /etc/fstab
devpts /dev/pts devpts mode=0620,gid=5 0 0
proc /proc proc defaults 0 0
virt-df #
virt-df is a command line tool to display free space
on virtual machine file systems. Unlike other tools, it does not only
display the size of disk allocated to a virtual machine, but can look
inside disk images to show how much space is actually being used.
tux > virt-df -a sles.qcow2
Filesystem 1K-blocks Used Available Use%
sles.qcow2:/dev/sda1 16381864 520564 15022492 4%
virt-edit #
virt-edit is a command line tool capable of editing
files that reside in the named virtual machine (or disk image).
virt-tar-in/out #
virt-tar-in unpacks an uncompressed TAR archive into a
virtual machine disk image or named libvirt domain.
virt-tar-out packs a virtual machine disk image
directory into a TAR archive.
tux > virt-tar-out -a sles.qcow2 /home homes.tar
virt-copy-in/out #
virt-copy-in copies files and directories from the
local disk into a virtual machine disk image or named libvirt domain.
virt-copy-out copies files and directories out of a
virtual machine disk image or named libvirt domain.
tux > virt-copy-in -a sles.qcow2 data.tar /tmp/
virt-ls -a sles.qcow2 /tmp/
.ICE-unix
.X11-unix
data.tar
virt-log #
virt-log shows the log files of the named libvirt
domain, virtual machine or disk image. If the package
guestfs-winsupport is installed
it can also show the event log of a Windows virtual machine disk image.
tux > virt-log -a windows8.qcow2
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<Events>
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"><System><Provider Name="EventLog"></Provider>
<EventID Qualifiers="32768">6011</EventID>
<Level>4</Level>
<Task>0</Task>
<Keywords>0x0080000000000000</Keywords>
<TimeCreated SystemTime="2014-09-12 05:47:21"></TimeCreated>
<EventRecordID>1</EventRecordID>
<Channel>System</Channel>
<Computer>windows-uj49s6b</Computer>
<Security UserID=""></Security>
</System>
<EventData><Data><string>WINDOWS-UJ49S6B</string>
<string>WIN-KG190623QG4</string>
</Data>
<Binary></Binary>
</EventData>
</Event>
...
guestfish #
guestfish is a shell and command line tool for
examining and modifying virtual machine file systems. It uses libguestfs
and exposes all of the functionality of the guestfs API.
Examples of usage:
tux > guestfish -a disk.img <<EOF
run
list-filesystems
EOF

tux > guestfish
Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.
Type: 'help' for help on commands
'man' to read the manual
'quit' to quit the shell
><fs> add sles.qcow2
><fs> run
><fs> list-filesystems
/dev/sda1: ext3
><fs> mount /dev/sda1 /
cat /etc/fstab
devpts /dev/pts devpts mode=0620,gid=5 0 0
proc /proc proc defaults 0 0
sysfs /sys sysfs noauto 0 0
debugfs /sys/kernel/debug debugfs noauto 0 0
usbfs /proc/bus/usb usbfs noauto 0 0
tmpfs /run tmpfs noauto 0 0
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00001-part1 / ext3 defaults 1 1
Libguestfs provides tools that help convert Xen virtual machines or physical machines into KVM guests. The following section covers a special use case: converting a bare-metal machine into a KVM one.
Converting a physical machine into a KVM one is not yet supported in openSUSE Leap. This feature is released as a technology preview only.
Converting a physical machine requires collecting information about
it and transmitting this to a conversion server. This is achieved by
running a live system prepared with virt-p2v and
kiwi tools on the machine.
Install the needed packages with the command:
tux > sudo zypper in virt-p2v kiwi-desc-isoboot
The following steps document how to create an ISO image for a
bootable DVD. Alternatively, you can create a PXE boot image
instead; for more information about building PXE images with
KIWI, see man virt-p2v-make-kiwi.
Create a KIWI configuration:
tux > virt-p2v-make-kiwi -o /tmp/p2v.kiwi
The -o option defines where to create the KIWI configuration.
Edit the config.xml file in the generated
configuration if needed. For example, in
config.xml adjust the keyboard layout of the
live system.
Build the ISO image with kiwi:
tux > kiwi --build /tmp/p2v.kiwi \
     -d /tmp/build \
     --ignore-repos \
     --add-repo http://URL/TO/SLE/REPOSITORIES \
     --type iso
Burn the ISO on a DVD or a USB stick. With such a medium, boot the machine to be converted.
After the system is started, you will be asked for the connection
details of the conversion server. This server
is a machine with the virt-v2v package installed.
If the network setup is more complex than a DHCP client, click the button to open the YaST network configuration dialog.
Click the button to allow moving to the next page of the wizard.
Select the disks and network interfaces to be converted and define the VM data like the amount of allocated CPUs, memory and the Virtual Machine name.
If not defined, the created disk image format will be raw by default. This can be changed by entering the desired format in the field.
There are two possibilities to generate the virtual machine:
either using the local or the
libvirt output. The first one will place the
Virtual Machine disk image and configuration in the path defined
in the field. These can then be
used to define a new libvirt-handled guest using
virsh. The second method will create a new
libvirt-handled guest with the disk image placed in the pool
defined in the field.
Click to start it.
When using the guestfs tools on an image with Btrfs root partition (the default with openSUSE Leap) the following error message may be displayed:
tux > virt-ls -a /path/to/sles12sp2.qcow2 /
virt-ls: multi-boot operating systems are not supported
If using guestfish '-i' option, remove this option and instead
use the commands 'run' followed by 'list-filesystems'.
You can then mount filesystems you want by hand using the
'mount' or 'mount-ro' command.
If using guestmount '-i', remove this option and choose the
filesystem(s) you want to see by manually adding '-m' option(s).
Use 'virt-filesystems' to see what filesystems are available.
If using other virt tools, multi-boot operating systems won't work
with these tools. Use the guestfish equivalent commands
(see the virt tool manual page).
This is usually caused by the presence of snapshots in the guests. In this
case guestfs does not know which snapshot to bootstrap. To force the
use of a snapshot, use the -m parameter as follows:
tux > virt-ls -m /dev/sda2:/:subvol=@/.snapshots/2/snapshot -a /path/to/sles12sp2.qcow2 /
When troubleshooting problems within a libguestfs appliance, the environment variable LIBGUESTFS_DEBUG=1 can be used to enable debug messages. To output each command/API call in a format that is similar to guestfish commands, use the environment variable LIBGUESTFS_TRACE=1.
libguestfs-test-tool #
libguestfs-test-tool is a test program that checks if
basic libguestfs functionality is working. It will print a large amount
of diagnostic messages and details of the guestfs environment, then
create a test image and try to start it. If it runs to completion
successfully, the following message should be seen near the end:
===== TEST FINISHED OK =====
This section documents how to set up and use openSUSE Leap 42.3 as a virtual machine host.
A VM Guest system needs some means to communicate either with other VM Guest systems or with a local network. The network interface to the VM Guest system is made of a split device driver, which means that any virtual Ethernet device has a corresponding network interface in Dom0. This interface is s…
Apart from using the recommended libvirt library
(Part II, “Managing Virtual Machines with libvirt”), you can manage Xen guest
domains with the xl tool from the command line.
The documentation in this section describes advanced management tasks and configuration options that might help technology innovators implement leading-edge virtualization solutions. It is provided as a courtesy and does not imply that all documented options and tasks are supported by Novell, Inc.
This section introduces basic information about XenStore, its role in the Xen environment, the directory structure of files used by XenStore, and the description of XenStore's commands.
Setting up two Xen hosts as a failover system has several advantages compared to a setup where every server runs on dedicated hardware.
Usually, the hardware requirements for the Dom0 are the same as those for the openSUSE Leap operating system. Additional CPU, disk, memory, and network resources should be added to accommodate the resource demands of all planned VM Guest systems.
Remember that VM Guest systems, like physical machines, perform better when they run on faster processors and have access to more system memory.
The virtual machine host requires several software packages and their dependencies to be installed. To install all necessary packages, run YaST , select › and choose for installation. The installation can also be performed with YaST using the module › .
After the Xen software is installed, restart the computer and, on the boot screen, choose the newly added option with the Xen kernel.
Updates are available through your update channel. To be sure to have the latest updates installed, run YaST after the installation has finished.
When installing and configuring the openSUSE Leap operating system on the host, be aware of the following best practices and suggestions:
If the host should always run as Xen host, run YaST › and activate the Xen boot entry as default boot section.
In YaST, click .
Change the default boot to the label, then click .
Click .
For best performance, only the applications and processes required for virtualization should be installed on the virtual machine host.
When using both iSCSI and OCFS2 to host Xen images, the default OCFS2
timeouts in openSUSE Leap may not accommodate the additional iSCSI
latency. To reconfigure this timeout, run systemctl configure o2cb or edit
O2CB_HEARTBEAT_THRESHOLD in the system configuration.
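On SUSE systems the O2CB settings conventionally live in /etc/sysconfig/o2cb (path assumed from the standard ocfs2-tools layout; verify on your installation). Raising the heartbeat threshold would look like:

```
# /etc/sysconfig/o2cb (assumed location; the value is an example)
# Number of heartbeat iterations before a node is considered dead.
# Raise it when iSCSI latency exceeds the OCFS2 defaults.
O2CB_HEARTBEAT_THRESHOLD=61
```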
If you intend to use a watchdog device attached to the Xen host, use only one at a time. It is recommended to use a driver with actual hardware integration over a generic software one.
The Dom0 kernel is running virtualized, so tools like
irqbalance or lscpu will not reflect
the real hardware characteristics.
In a default Xen installation, a small percentage of system memory is reserved for the hypervisor, and all remaining memory is automatically allocated to Dom0. When virtual machines are created, memory is ballooned out of Dom0 to provide memory for the virtual machine. This process is called "autoballooning".
SUSE recommends disabling autoballooning and configuring Dom0 with adequate memory. Generally 10 percent of the total system memory is sufficient, with a minimum of 1 GiB and a maximum of 64 GiB.
The amount of memory reserved for Dom0 is a function of the number of VM Guest(s) running on the host since Dom0 provides backend network and disk I/O services for each VM Guest. Other workloads running in Dom0 should also be considered when calculating Dom0 memory allocation. In general, memory sizing of Dom0 should be determined like any other virtual machine.
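The sizing rule above (10 percent of total system memory, clamped between 1 GiB and 64 GiB) can be sketched as a small calculation. The example total is hard-coded rather than read from xl info:

```shell
# Compute a Dom0 memory recommendation in MiB from the rule
# "10% of total RAM, at least 1 GiB, at most 64 GiB".
total_mib=262144            # example host: 256 GiB total memory

dom0_mib=$(( total_mib / 10 ))
[ "$dom0_mib" -lt 1024 ]  && dom0_mib=1024    # minimum 1 GiB
[ "$dom0_mib" -gt 65536 ] && dom0_mib=65536   # maximum 64 GiB

echo "recommended dom0_mem: ${dom0_mib}M"
```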
Determine memory allocation required for Dom0.
At Dom0, type xl info to view the amount of memory
that is available on the machine. Dom0's current memory allocation can
be determined with the xl list command.
Run › .
Select the Xen section.
In , add
dom0_mem=MEM_AMOUNT where
MEM_AMOUNT is the maximum amount of memory to
allocate to Dom0. Add K, M, or
G, to specify the size, for example,
dom0_mem=2G.
Restart the computer to apply the changes.
When using the XL tool stack and the dom0_mem=
option for the Xen hypervisor in GRUB 2, you need to disable xl
autoballoon in /etc/xen/xl.conf.
Otherwise, launching VMs will fail with errors about not being able to
balloon down Dom0. Therefore, add autoballoon=0 to
xl.conf if you have the dom0_mem=
option specified for Xen. See also the Xen documentation on Dom0 memory.
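Putting the two settings together, a sketch of the relevant lines (the 2G value is an example; GRUB_CMDLINE_XEN_DEFAULT is the variable name used on SUSE systems, so verify it in your /etc/default/grub):

```
# /etc/default/grub -- pass a fixed Dom0 allocation to the hypervisor,
# then regenerate grub.cfg with grub2-mkconfig
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2G"

# /etc/xen/xl.conf -- stop xl from ballooning Dom0 down at VM start
autoballoon=0
```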
In a fully virtualized guest, the default network card is an emulated Realtek network card. However, it is also possible to use the split network driver to run the communication between Dom0 and a VM Guest. By default, both interfaces are presented to the VM Guest, because the drivers of some operating systems require both to be present.
When using openSUSE Leap, only the paravirtualized network cards are available for the VM Guest by default. The following network options are available:
To use an emulated network interface like an emulated Realtek card,
specify type=ioemu in the vif
device section of the domain xl configuration. An example configuration
would look like:
vif = [ 'type=ioemu,mac=00:16:3e:5f:48:e4,bridge=br0' ]
Find more details about the xl domain configuration in the
xl.cfg manual page (man 5 xl.cfg).
When you specify type=vif and do not specify a model
or type, the paravirtualized network interface is used:
vif = [ 'type=vif,mac=00:16:3e:5f:48:e4,bridge=br0,backend=0' ]
To offer both interfaces to the guest, specify both type and model. The xl configuration would look like:
vif = [ 'type=ioemu,mac=00:16:3e:5f:48:e4,model=rtl8139,bridge=br0' ]
In this case, one of the network interfaces should be disabled on the VM Guest.
If virtualization software is correctly installed, the computer boots to display the GRUB 2 boot loader with a option on the menu. Select this option to start the virtual machine host.
In Xen, the hypervisor manages the memory resource. If you need to
reserve system memory for a recovery kernel in Dom0, this memory needs to
be reserved by the hypervisor. Thus, it is necessary to add the parameter
crashkernel=size to the kernel
line instead of using the line with the other boot options.
For more information on the crashkernel parameter, see
Section 17.4, “Calculating crashkernel Allocation Size”.
If the option is not on the GRUB 2 menu, review the steps for installation and verify that the GRUB 2 boot loader has been updated. If the installation has been done without selecting the Xen pattern, run the YaST Software Management module, select the Patterns filter, and choose the Xen pattern for installation.
After booting the hypervisor, the Dom0 virtual machine starts and displays its graphical desktop environment. If you did not install a graphical desktop, the command line environment appears.
Sometimes the graphics system does not work properly. In
this case, add vga=ask to the boot parameters. To
make the setting permanent, use vga=mode-0x??? where
??? is calculated as 0x100 + VESA
mode from
http://en.wikipedia.org/wiki/VESA_BIOS_Extensions, for
example vga=mode-0x361.
Before starting to install virtual guests, make sure that the system time is correct. To do this, configure NTP (Network Time Protocol) on the controlling domain:
In YaST, open the NTP configuration module.
Select the option to automatically start the NTP daemon during boot. Provide the IP address of an existing NTP time server, then confirm your settings.
Hardware clocks are commonly not very precise. All modern operating systems
try to correct the system time relative to the hardware time by means of an
additional time source. To get the correct time on all VM Guest systems,
also activate the network time services on each respective guest, or make
sure that the guest uses the system time of the host. For more information
about Independent Wallclocks in openSUSE Leap, see
Section 15.2, “Xen Virtual Machine Clock Settings”.
For more information about managing virtual machines, see Chapter 19, Managing a Virtualization Environment.
To take full advantage of VM Guest systems, it is sometimes necessary to assign specific PCI devices to a dedicated domain. When using fully virtualized guests, this functionality is only available if the chipset of the system supports this feature, and if it is activated from the BIOS.
This feature is available from both AMD* and Intel*. For AMD machines, the feature is called IOMMU; Intel calls it VT-d. Note that Intel VT technology alone is not sufficient to use this feature for fully virtualized guests. To make sure that your computer supports this feature, ask your supplier specifically to deliver a system that supports PCI Pass-Through.
Some graphics drivers use highly optimized ways to access DMA. This is not supported, and thus using graphics cards may be difficult.
When accessing PCI devices behind a PCIe bridge, all of the PCI devices must be assigned to a single guest. This limitation does not apply to PCIe devices.
Guests with dedicated PCI devices cannot be migrated live to a different host.
The configuration of PCI Pass-Through is twofold. First, the hypervisor must be informed at boot time that a PCI device should be available for reassigning. Second, the PCI device must be assigned to the VM Guest.
Select a device to reassign to a VM Guest. To do this, run
lspci -k, and read the device number and the name of
the original module that is assigned to the device:
06:01.0 Ethernet controller: Intel Corporation Ethernet Connection I217-LM (rev 05)
Subsystem: Dell Device 0617
Kernel driver in use: e1000e
Kernel modules: e1000e
In this case, the PCI number is 06:01.0 and the
dependent kernel module is e1000e.
Specify a module dependency to ensure that
xen_pciback is the first module to control the
device. Add the file named
/etc/modprobe.d/50-e1000e.conf with the following
content:
install e1000e /sbin/modprobe xen_pciback ; /sbin/modprobe \
  --first-time --ignore-install e1000e
Instruct the xen_pciback module to control the
device using the 'hide' option. Edit or create
/etc/modprobe.d/50-xen-pciback.conf with the
following content:
options xen_pciback hide=(06:01.0)
Reboot the system.
Check if the device is in the list of assignable devices with the command
xl pci-assignable-list
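The two modprobe snippets above can be generated with a short script. This is only a sketch: the PCI ID 06:01.0 and the e1000e module are taken from the example output above and must be adjusted for your hardware, and the files are written to a scratch directory here rather than to /etc/modprobe.d/.

```shell
# Sketch: generate the two modprobe configuration snippets from the values
# found in the previous steps. PCI_ID and MODULE match the lspci -k example
# above; adjust them for your hardware. On a real host, copy the resulting
# files to /etc/modprobe.d/ and reboot.
PCI_ID="06:01.0"
MODULE="e1000e"
OUTDIR="$(mktemp -d)"

# Make xen_pciback grab the device before the native driver does.
cat > "$OUTDIR/50-$MODULE.conf" <<EOF
install $MODULE /sbin/modprobe xen_pciback ; /sbin/modprobe --first-time --ignore-install $MODULE
EOF

# Hide the device from Dom0 so it becomes assignable.
cat > "$OUTDIR/50-xen-pciback.conf" <<EOF
options xen_pciback hide=($PCI_ID)
EOF

ls "$OUTDIR"
```

After rebooting with these files in place, the device should appear in the output of xl pci-assignable-list.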
To avoid restarting the host system, you can use xl for dynamic PCI Pass-Through assignment.
Begin by making sure that dom0 has the pciback module loaded:
tux > sudo modprobe pciback
Then make a device assignable by using xl
pci-assignable-add. For example, to make the device
06:01.0 available for guests, run the command:
tux > sudo xl pci-assignable-add 06:01.0
There are several possibilities to dedicate a PCI device to a VM Guest:
During installation, add the pci line to the
configuration file:
pci=['06:01.0']
The command xl can be used to add or remove PCI
devices on the fly. To add the device with number
06:01.0 to a guest with name
sles12 use:
xl pci-attach sles12 06:01.0
To add the device to the guest permanently, add the following snippet to the guest configuration file:
pci = [ '06:01.0,power_mgmt=1,permissive=1' ]
After assigning the PCI device to the VM Guest, the guest system must take care of the configuration and device drivers for this device.
Xen 4.0 and newer supports VGA graphics adapter pass-through on fully virtualized VM Guests. The guest can take full control of the graphics adapter with high-performance full 3D and video acceleration.
VGA Pass-Through functionality is similar to PCI Pass-Through and as such also requires IOMMU (or Intel VT-d) support from the mainboard chipset and BIOS.
Only the primary graphics adapter (the one that is used when you power on the computer) can be used with VGA Pass-Through.
VGA Pass-Through is supported only for fully virtualized guests. Paravirtual guests (PV) are not supported.
The graphics card cannot be shared between multiple VM Guests using VGA Pass-Through — you can dedicate it to one guest only.
To enable VGA Pass-Through, add the following settings to your fully virtualized guest configuration file:
gfx_passthru=1
pci=['yy:zz.n']
where yy:zz.n is the PCI controller ID of the VGA
graphics adapter as found with lspci -v on Dom0.
In some circumstances, problems may occur during the installation of the VM Guest. This section describes some known problems and their solutions.
The software I/O translation buffer allocates a large chunk of low memory early in the bootstrap process. If the requests for memory exceed the size of the buffer it usually results in a hung boot process. To check if this is the case, switch to console 10 and check the output there for a message similar to
kernel: PCI-DMA: Out of SW-IOMMU space for 32768 bytes at device 000:01:02.0
In this case you need to increase the size of the
swiotlb. Add swiotlb=128 to the
kernel command line of Dom0. Note that the number can be adjusted up or
down to find the optimal size for the machine.
The swiotlb=force kernel parameter is required for DMA
access to work for PCI devices on a PV guest. For more information about
IOMMU and the swiotlb option see the file
boot-options.txt from the package
kernel-source.
There are several resources on the Internet that provide interesting information about PCI Pass-Through:
There are two methods for passing through individual host USB devices to a guest. The first is via an emulated USB device controller, the second is using PVUSB.
Before you can pass through a USB device to the VM Guest, you need to
identify it on the VM Host Server. Use the lsusb command to
list the USB devices on the host system:
root # lsusb
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 003: ID 0461:4d15 Primax Electronics, Ltd Dell Optical Mouse
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
To pass through the Dell mouse, for example, specify either the device tag
in the form vendor_id:device_id (0461:4d15) or the bus
address in the form bus.device (2.3). Remember to remove
leading zeros, otherwise xl would interpret the numbers
as octal values.
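The leading-zero pitfall can be avoided with a tiny helper that forces decimal interpretation of the lsusb numbers. This is an illustration only; usb_addr is not an xl command:

```shell
# Sketch: convert lsusb "Bus 002 Device 003" numbering into the bus.device
# form that xl expects. The 10# prefix forces decimal interpretation, which
# strips leading zeros so xl does not read them as octal values.
usb_addr() {
  # $1 = bus number, $2 = device number, exactly as printed by lsusb
  printf '%d.%d\n' "$((10#$1))" "$((10#$2))"
}

usb_addr 002 003   # prints 2.3
```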
In emulated USB, the device model (QEMU) presents an emulated USB controller to the guest. The USB device is then controlled from Dom0 while USB commands are translated between the VM Guest and the host USB device. This method is only available to fully virtualized domains (HVM).
Enable the emulated USB hub with the usb=1 option. Then
specify devices among the list of devices in the config file along with
other emulated devices by using host:USBID. For example:
usb=1
usbdevice=['tablet','host:2.3','host:0424:460']
PVUSB is a new high-performance method for USB Pass-Through from Dom0 to virtualized guests. It supports both USB 2.0 and USB 1.1 devices and uses paravirtualized front-end and back-end interfaces. With PVUSB, there are two ways to add USB devices to a guest:
via the configuration file at domain creation time
via hotplug while the VM is running
PVUSB works for both PV and HVM guests. To use PVUSB, you need usbfront in your guest OS, and usbback in Dom0 or the USB back-end in QEMU. On openSUSE Leap, the USB back-end comes with QEMU.
Xen 4.7 introduces xl support for PVUSB and for hotplugging USB
devices.
In the configuration file, specify USB controllers and USB host devices
with usbctrl and usbdev. For example,
in case of HVM guests:
usbctrl=['type=qusb,version=2,ports=4', 'type=qusb,version=1,ports=4']
usbdev=['hostbus=2, hostaddr=1, controller=0, port=1']
It is important to specify type=qusb for the controller
of HVM guests.
To manage hotplugging PVUSB devices, use the
usbctrl-attach, usbctrl-detach,
usb-list, usbdev-attach and
usbdev-detach subcommands. For example:
Create a USB controller which is version USB 1.1 and has 8 ports:
root # xl usbctrl-attach test_vm version=1 ports=8 type=qusb
Find the first available controller:port in the domain, and attach USB
device whose busnum:devnum is 2:3 to it; you can also specify
controller and port:
root # xl usbdev-attach test_vm hostbus=2 hostaddr=3
Show all USB controllers and USB devices in the domain:
root # xl usb-list test_vm
Devid Type BE state usb-ver ports
0 qusb 0 1 1 8
Port 1: Bus 002 Device 003
Port 2:
Port 3:
Port 4:
Port 5:
Port 6:
Port 7:
Port 8:
Detach the USB device under controller 0 port 1:
root # xl usbdev-detach test_vm 0 1
Remove the USB controller with the indicated dev_id, and
all USB devices under it:
root # xl usbctrl-detach test_vm dev_id
For more information, see https://wiki.xenproject.org/wiki/Xen_USB_Passthrough.
A VM Guest system needs some means to communicate either with other VM Guest systems or with a local network. The network interface to the VM Guest system is made of a split device driver, which means that any virtual Ethernet device has a corresponding network interface in Dom0. This interface is set up to access a virtual network that is run in Dom0. The bridged virtual network is fully integrated into the system configuration of openSUSE Leap and can be configured with YaST.
When installing a Xen VM Host Server, a bridged network configuration will be proposed during normal network configuration. The user can choose to change the configuration during the installation and customize it to the local needs.
If desired, Xen VM Host Server can be installed after performing a
default Physical Server installation using the Install
Hypervisor and Tools module in YaST. This module will
prepare the system for hosting virtual machines, including invocation of
the default bridge networking proposal.
In case the necessary packages for a Xen VM Host Server are installed
manually with rpm or
zypper, the remaining system configuration needs to
be done by the administrator manually or with YaST.
The network scripts that are provided by Xen are not used by default in openSUSE Leap. They are only delivered for reference but disabled. The network configuration that is used in openSUSE Leap is done by means of the YaST system configuration similar to the configuration of network interfaces in openSUSE Leap.
For more general information about managing network bridges, see Section 12.2, “Bridged Networking”.
The Xen hypervisor can provide different types of network interfaces to the VM Guest systems. The preferred network device should be a paravirtualized network interface. This yields the highest transfer rates with the lowest system requirements. Up to eight network interfaces may be provided for each VM Guest.
Systems that are not aware of paravirtualized hardware may not have this option. To connect systems to a network that can only run fully virtualized, several emulated network interfaces are available. The following emulations are at your disposal:
Realtek 8139 (PCI). This is the default emulated network card.
AMD PCnet32 (PCI)
NE2000 (PCI)
NE2000 (ISA)
Intel e100 (PCI)
Intel e1000 and its variants e1000-82540em, e1000-82544gc, e1000-82545em (PCI)
All these network interfaces are software interfaces. Because every network interface must have a unique MAC address, an address range has been assigned to Xensource that can be used by these interfaces.
The default configuration of MAC addresses in virtualized environments creates a random MAC address that looks like 00:16:3E:xx:xx:xx. Normally, the number of available MAC addresses is large enough to get only unique addresses. However, if you have a very large installation, or want to make sure that no problems arise from random MAC address assignment, you can also assign these addresses manually.
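One way to assign addresses manually is to derive them from a per-guest counter within the Xen range 00:16:3E. This is a sketch; the xen_mac helper is hypothetical, and the resulting address would be placed in the mac= part of a vif= line:

```shell
# Sketch: derive a fixed MAC address inside the Xen range 00:16:3E from a
# unique per-interface counter, so addresses stay unique and reproducible
# within one installation.
xen_mac() {
  # $1 = unique integer per guest interface (0 .. 16777215)
  printf '00:16:3e:%02x:%02x:%02x\n' \
    $(( ($1 >> 16) & 0xff )) $(( ($1 >> 8) & 0xff )) $(( $1 & 0xff ))
}

xen_mac 66   # prints 00:16:3e:00:00:42
```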
For debugging or system management purposes, it may be useful to know
which virtual interface in Dom0 is connected to which Ethernet
device in a running guest. This information may be read from the device
naming in Dom0. All virtual devices follow the rule
vif<domain
number>.<interface_number>.
For example, if you want to know the device name for the third interface
(eth2) of the VM Guest with id 5, the device in Dom0 would be
vif5.2. To obtain a list of all available interfaces,
run the command ip a.
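The naming rule can be captured in a one-line helper, shown here purely as an illustration (vif_name is not an xl command):

```shell
# Sketch: compute the Dom0 device name for a guest interface according to
# the vif<domain number>.<interface_number> rule described above.
vif_name() {
  # $1 = domain ID (from xl list), $2 = guest interface number (eth2 -> 2)
  printf 'vif%d.%d\n' "$1" "$2"
}

vif_name 5 2   # prints vif5.2
```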
The device naming does not contain any information about which bridge
this interface is connected to. However, this information is available in
Dom0. To get an overview about which interface is connected to which
bridge, run the command bridge link. The output may
look like the following:
tux > sudo bridge link
2: eth0 state DOWN : <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 master br0
3: eth1 state UP : <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 master br1
In this example, there are three configured bridges:
br0, br1 and
br2. Currently, br0 and
br1 each have a real Ethernet device added:
eth0 and eth1, respectively.
Xen can be set up to use host-based routing in the controlling Dom0. Unfortunately, this is not yet well supported from YaST and requires quite an amount of manual editing of configuration files. Thus, this is a task that requires an advanced administrator.
The following configuration will only work when using fixed IP addresses. Using DHCP is not practicable with this procedure, because the IP address must be known to both the VM Guest and the VM Host Server system.
The easiest way to create a routed guest is to change the networking from a bridged to a routed network. As a requirement to the following procedures, a VM Guest with a bridged network setup must be installed. For example, the VM Host Server is named earth with the IP 192.168.1.20, and the VM Guest has the name alice with the IP 192.168.1.21.
Make sure that alice is shut down. Use the
xl shutdown command to shut it down, and
xl list to verify.
Prepare the network configuration on the VM Host Server earth:
Create a hotplug interface that will be used to route the traffic. To
accomplish this, create a file named
/etc/sysconfig/network/ifcfg-alice.0
with the following content:
NAME="Xen guest alice"
BOOTPROTO="static"
STARTMODE="hotplug"
Edit the file
/etc/sysconfig/SuSEfirewall2 and add
the following configurations:
Add alice.0 to the devices in FW_DEV_EXT:
FW_DEV_EXT="br0 alice.0"
Switch on the routing in the firewall:
FW_ROUTE="yes"
Tell the firewall which address should be forwarded:
FW_FORWARD="192.168.1.21/32,0/0"
Finally, restart the firewall with the command:
tux > sudo systemctl restart SuSEfirewall2
Add a static route to the interface of alice. To accomplish
this, add the following line to the end of
/etc/sysconfig/network/routes:
192.168.1.21 - - alice.0
To make sure that the switches and routers that the VM Host Server is
connected to know about the routed interface, activate
proxy_arp on earth. Add the following lines
to /etc/sysctl.conf:
net.ipv4.conf.default.proxy_arp = 1
net.ipv4.conf.all.proxy_arp = 1
Activate all changes with the commands:
tux > sudo systemctl restart systemd-sysctl wicked
Proceed with configuring the Xen configuration of the VM Guest by changing the vif interface configuration for alice as described in Section 19.1, “XL—Xen Management Tool”. Make the following changes to the text file you generate during the process:
Remove the snippet
bridge=br0
And add the following one:
vifname=vifalice.0
or
vifname=vifalice.0=emu
for a fully virtualized domain.
Change the script that is used to set up the interface to the following:
script=/etc/xen/scripts/vif-route-ifup
Activate the new configuration and start the VM Guest.
The remaining configuration tasks must be accomplished from inside the VM Guest.
Open a console to the VM Guest with xl console
DOMAIN and log in.
Check that the guest IP is set to 192.168.1.21.
Provide VM Guest with a host route and a default gateway to the
VM Host Server. Do this by adding the following lines to
/etc/sysconfig/network/routes:
192.168.1.20 - - eth0
default 192.168.1.20 - -
Finally, test the network connection from the VM Guest to the world outside and from the network to your VM Guest.
Creating a masqueraded network setup is quite similar to the routed
setup. However, there is no proxy_arp needed, and some firewall rules are
different. To create a masqueraded network to a guest dolly
with the IP address 192.168.100.1 where the host has its external
interface on br0, proceed as follows. For easier
configuration, only the already installed guest is modified to use a
masqueraded network:
Shut down the VM Guest system with xl shutdown
DOMAIN.
Prepare the network configuration on the VM Host Server:
Create a hotplug interface that will be used to route the traffic. To
accomplish this, create a file named
/etc/sysconfig/network/ifcfg-dolly.0
with the following content:
NAME="Xen guest dolly"
BOOTPROTO="static"
STARTMODE="hotplug"
Edit the file
/etc/sysconfig/SuSEfirewall2 and add
the following configurations:
Add dolly.0 to the devices in FW_DEV_DMZ:
FW_DEV_DMZ="dolly.0"
Switch on the routing in the firewall:
FW_ROUTE="yes"
Switch on masquerading in the firewall:
FW_MASQUERADE="yes"
Tell the firewall which network should be masqueraded:
FW_MASQ_NETS="192.168.100.1/32"
Remove the networks from the masquerading exceptions:
FW_NOMASQ_NETS=""
Finally, restart the firewall with the command:
tux > sudo systemctl restart SuSEfirewall2
Add a static route to the interface of dolly. To
accomplish this, add the following line to the end of
/etc/sysconfig/network/routes:
192.168.100.1 - - dolly.0
Activate all changes with the command:
tux > sudo systemctl restart wicked
Proceed with configuring the Xen configuration of the VM Guest.
Change the vif interface configuration for dolly as described in Section 19.1, “XL—Xen Management Tool”.
Remove the entry:
bridge=br0
And add the following one:
vifname=vifdolly.0
Change the script that is used to set up the interface to the following:
script=/etc/xen/scripts/vif-route-ifup
Activate the new configuration and start the VM Guest.
The remaining configuration tasks need to be accomplished from inside the VM Guest.
Open a console to the VM Guest with xl console
DOMAIN and log in.
Check whether the guest IP is set to 192.168.100.1.
Provide VM Guest with a host route and a default gateway to the
VM Host Server. Do this by adding the following lines to
/etc/sysconfig/network/routes:
192.168.1.20 - - eth0
default 192.168.1.20 - -
Finally, test the network connection from the VM Guest to the outside world.
There are many network configuration possibilities available to Xen. The following configurations are not activated by default:
With Xen, you may limit the network transfer rate a virtual guest may use to access a bridge. To configure this, you need to modify the VM Guest configuration as described in Section 19.1, “XL—Xen Management Tool”.
In the configuration file, first search for the device that is connected to the virtual bridge. The configuration looks like the following:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0' ]
To add a maximum transfer rate, add a parameter
rate to this configuration as in:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0,rate=100Mb/s' ]
Note that the rate is either Mb/s (megabits per
second) or MB/s (megabytes per second). In the above
example, the maximum transfer rate of the virtual interface is 100
megabits. By default, there is no limitation to the bandwidth of a guest
to the virtual bridge.
It is even possible to fine-tune the behavior by specifying the time window that is used to define the granularity of the credit replenishment:
vif = [ 'mac=00:16:3e:4f:94:a9,bridge=br0,rate=100Mb/s@20ms' ]
To monitor the traffic on a specific interface, the application
iftop displays the current network traffic in a
terminal.
When running a Xen VM Host Server, you need to define the interface
that is monitored. The interface that Dom0 uses to get access to
the physical network is the bridge device, for example
br0. This, however, may vary on your system. To
monitor all traffic to the physical interface, run a terminal as
root and use the command:
iftop -i br0
To monitor the network traffic of a special network interface of a specific VM Guest, supply the correct virtual interface. For example, to monitor the first Ethernet device of the domain with id 5, use the command:
iftop -i vif5.0
To quit iftop, press the key Q. More
options and possibilities are available in the manual page man
8 iftop.
Apart from using the recommended libvirt library
(Part II, “Managing Virtual Machines with libvirt”), you can manage Xen guest
domains with the xl tool from the command line.
The xl program is a tool for managing Xen guest
domains. It is part of the xen-tools package.
xl is based on the LibXenlight library, and can be
used for general domain management, such as domain creation, listing,
pausing, or shutting down. Usually you need to be root to
execute xl commands.
xl can only manage running guest domains specified by
their configuration file. If a guest domain is not running, you cannot
manage it with xl.
For managed guest domains, as previously offered by the obsolete
xm command, we now recommend using
libvirt's virsh and
virt-manager tools. For more information, see
Part II, “Managing Virtual Machines with libvirt”.
xl operations rely upon the
xenstored and
xenconsoled services. Make sure that the command
tux > sudo systemctl start xencommons
is run at boot time to initialize all the daemons required by
xl.
xenbr0 Network Bridge in the Host Domain
In the most common network configuration, you need to set up a bridge in
the host domain named xenbr0 to have a
working network for the guest domains.
The basic structure of every xl command is:
xl <subcommand> [options] domain_id
where <subcommand> is the xl command to run, domain_id is the ID
number assigned to a domain or the name of the virtual machine, and
[options] indicates subcommand-specific options.
For a complete list of the available xl subcommands,
run xl help. For each command, there is a more
detailed help available that is obtained with the extra parameter
--help. More information about the respective
subcommands is available in the manual page of xl.
For example, xl list --help displays all options
that are available to the list command. As an example, the xl
list command displays the status of all virtual machines:
tux > sudo xl list
Name           ID   Mem  VCPUs  State   Time(s)
Domain-0        0   457      2  r-----   2712.9
sles12          7   512      1  -b----     16.3
opensuse            512      1             12.9
The information indicates if a machine is
running, and in which state it is. The most common flags are
r (running) and b (blocked) where
blocked means it is either waiting for IO, or sleeping because there is
nothing to do. For more details about the state flags, see man 1
xl.
Other useful xl commands include:
xl create creates a virtual machine from a given
configuration file.
xl reboot reboots a virtual machine.
xl destroy immediately terminates a virtual machine.
xl block-list displays all virtual block devices
attached to a virtual machine.
When operating domains, xl requires a domain
configuration file for each domain. The default directory to store such
configuration files is /etc/xen/.
A domain configuration file is a plain text file. It consists of
several
KEY=VALUE
pairs. Some keys are mandatory, some are general and apply to any
guest, and some apply only to a specific guest type (para or fully
virtualized). A value can either be a "string"
surrounded by single or double quotes, a number, a boolean value, or
a list of several values enclosed in brackets [ value1,
value2, ... ].
/etc/xen/sled12.cfg

name = "sled12"
builder = "hvm"
vncviewer = 1
memory = 512
disk = [ '/var/lib/xen/images/sled12.raw,,hda', '/dev/cdrom,,hdc,cdrom' ]
vif = [ 'mac=00:16:3e:5f:48:e4,model=rtl8139,bridge=br0' ]
boot = "n"
To start such a domain, run xl create
/etc/xen/sled12.cfg.
To make a guest domain start automatically after the host system boots, follow these steps:
Create the domain configuration file if it does not exist, and save it
in the /etc/xen/ directory, for example
/etc/xen/domain_name.cfg.
Make a symbolic link of the guest domain configuration file in the
auto/ subdirectory.
tux > sudo ln -s /etc/xen/domain_name.cfg /etc/xen/auto/domain_name.cfg
On the next system boot, the guest domain defined in
domain_name.cfg will be started.
In the guest domain configuration file, you can define actions to be performed on a predefined set of events. For example, to tell the domain to restart itself after it is powered off, include the following line in its configuration file:
on_poweroff="restart"
A list of predefined events for a guest domain follows:
on_poweroff: Specifies what should be done with the domain if it shuts itself down.
on_reboot: Action to take if the domain shuts down with a reason code requesting a reboot.
on_watchdog: Action to take if the domain shuts down because of a Xen watchdog timeout.
on_crash: Action to take if the domain crashes.
For these events, you can define one of the following actions:
destroy: Destroy the domain.
restart: Destroy the domain and immediately create a new domain with the same configuration.
rename-restart: Rename the domain that terminated, and then immediately create a new domain with the same configuration as the original.
preserve: Keep the domain. It can be examined, and later destroyed with
xl destroy.
coredump-destroy: Write a core dump of the domain to
/var/xen/dump/NAME and then destroy the domain.
coredump-restart: Write a core dump of the domain to
/var/xen/dump/NAME and then restart the domain.
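Put together, a guest configuration file might set several of these events at once. This is a sketch following the xl.cfg syntax shown above; the chosen actions are only examples:

```
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "coredump-restart"
```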
The Time Stamp Counter (TSC) may be specified for each domain in the guest domain configuration file (for more information, see Section 19.1.1, “Guest Domain Configuration File”).
With the tsc_mode setting, you specify whether
rdtsc instructions are executed “natively” (fast, but
TSC-sensitive applications may sometimes run incorrectly) or emulated
(always run correctly, but performance may suffer).
tsc_mode=0 (default)Use this to ensure correctness while providing the best performance possible—for more information, see https://xenbits.xen.org/docs/4.3-testing/misc/tscmode.txt.
tsc_mode=1 (always emulate)Use this when TSC-sensitive apps are running and worst-case performance degradation is known and acceptable.
tsc_mode=2 (never emulate)Use this when all applications running in this VM are TSC-resilient and highest performance is required.
tsc_mode=3 (PVRDTSCP)High-TSC-frequency applications may be paravirtualized (modified) to obtain both correctness and highest performance—any unmodified applications must be TSC-resilient.
For background information, see https://xenbits.xen.org/docs/4.3-testing/misc/tscmode.txt.
Make sure the virtual machine to be saved is running.
In the host environment, enter
tux > sudo xl save ID STATE-FILE
where ID is the virtual machine ID you want
to save, and STATE-FILE is the name you
specify for the memory state file. By default, the domain will no
longer be running after you create its snapshot. Use
-c to keep it running even after you create the
snapshot.
Make sure the virtual machine to be restored has not been started since you ran the save operation.
In the host environment, enter
tux > sudo xl restore STATE-FILE
where STATE-FILE is the previously saved
memory state file. By default, the domain will be running after it is
restored. To pause it after the restore, use -p.
A virtual machine’s state can be displayed by viewing the results of
the xl list command, which abbreviates the state using
a single character.
r - running - The virtual machine is currently
running and consuming allocated resources.
b - blocked - The virtual machine’s processor is
not running and not able to run. It is either waiting for I/O or has
stopped working.
p - paused - The virtual machine is paused. It does
not interact with the hypervisor but still maintains its allocated
resources, such as memory.
s - shutdown - The guest operating system is in the
process of being shut down, rebooted, or suspended, and the virtual
machine is being stopped.
c - crashed - The virtual machine has crashed and is
not running.
d - dying - The virtual machine is in the process of
shutting down or crashing.
The disk(s) specification for a Xen domain in the domain configuration file is as straightforward as the following example:
disk = [ 'format=raw,vdev=hdc,access=ro,devtype=cdrom,target=/root/image.iso' ]
It defines a disk block device based on the
/root/image.iso disk image file. The disk will be seen
as hdc by the guest, with read-only
(ro) access. The type of the device is
cdrom with raw format.
The following example defines an identical device, but using simplified positional syntax:
disk = [ '/root/image.iso,raw,hdc,ro,cdrom' ]
You can include more disk definitions in the same line, each one separated by a comma. If a parameter is not specified, then its default value is taken:
disk = [ '/root/image.iso,raw,hdc,ro,cdrom','/dev/vg/guest-volume,,hda','...' ]
target: Source block device or disk image file path.
format: The format of the image file. Default is raw.
vdev: Virtual device as seen by the guest. Supported values are hd[x],
xvd[x], sd[x] etc. See
/usr/share/doc/packages/xen/misc/vbd-interface.txt
for more details. This parameter is mandatory.
access: Whether the block device is provided to the guest in read-only or
read-write mode. Supported values are ro or
r for read-only, and rw or
w for read/write access. Default is
ro for devtype=cdrom, and
rw for other device types.
devtype: Qualifies virtual device type. Supported value is
cdrom.
backendtype: The back-end implementation to use. Supported values are
phy, tap, and
qdisk. Normally this option should not be specified as
the back-end type is automatically determined.
script: Specifies that target is not a normal host path, but
rather information to be interpreted by the executable program. The
specified script file is looked for in
/etc/xen/scripts if it does not point to an absolute
path. These scripts are normally called
block-<script_name>.
For more information about specifying virtual disks, see
/usr/share/doc/packages/xen/misc/xl-disk-configuration.txt.
Similar to mapping a local disk image (see Section 20.1, “Mapping Physical Storage to Virtual Disks”), you can map a network disk as a virtual disk as well.
The following example shows mapping of an RBD (RADOS Block Device) disk with multiple Ceph monitors and cephx authentication enabled:
disk = [ 'vdev=hdc, backendtype=qdisk, \ target=rbd:libvirt-pool/new-libvirt-image:\ id=libvirt:key=AQDsPWtW8JoXJBAAyLPQe7MhCC+JPkI3QuhaAw==:auth_supported=cephx;none:\ mon_host=137.65.135.205\\:6789;137.65.135.206\\:6789;137.65.135.207\\:6789' ]
Following is an example of an NBD (Network Block Device) disk mapping:
disk = [ 'vdev=hdc, backendtype=qdisk, target=nbd:151.155.144.82:5555' ]
When a virtual machine is running, each of its file-backed virtual disks consumes a loopback device on the host. By default, the host allows up to 64 loopback devices to be consumed.
To simultaneously run more file-backed virtual disks on a host, you can
increase the number of available loopback devices by adding the following
option to the host’s /etc/modprobe.conf.local file.
options loop max_loop=x
where x is the maximum number of loopback devices to
create.
Changes take effect after the module is reloaded.
Enter rmmod loop and modprobe loop to
unload and reload the module. In case rmmod does not
work, unmount all existing loop devices or reboot the computer.
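For example, to allow up to 128 loopback devices (128 is an illustrative value; choose a limit that covers the number of file-backed virtual disks you plan to run):

```
# /etc/modprobe.conf.local
options loop max_loop=128
```

After changing the file, reload the module as described above for the new limit to take effect.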
While it is always possible to add new block devices to a VM Guest system, it is sometimes more desirable to increase the size of an existing block device. If such a modification is already planned during deployment of the VM Guest, take the following basic considerations into account:
Use a block device that can be increased in size. LVM devices and file system images are commonly used.
Do not partition the device inside the VM Guest, but create the file system directly on the main device. For example, use /dev/xvdb directly instead of adding partitions to /dev/xvdb.
Make sure that the file system to be used can be resized. Sometimes, for
example with Ext3, some features must be switched off to be able to resize
the file system. A file system that can be resized online and mounted is
XFS. Use the command xfs_growfs to
resize that file system after the underlying block device has been
increased in size. For more information about XFS, see
man 8 xfs_growfs.
When resizing an LVM device that is assigned to a VM Guest, the new size is automatically known to the VM Guest. No further action is needed to inform the VM Guest about the new size of the block device.
When using file system images, a loop device is used to attach the image file to the guest. For more information about resizing that image and refreshing the size information for the VM Guest, see Section 22.2, “Sparse Image Files and Disk Space”.
There are scripts that can help with managing advanced storage scenarios
such as disk environments provided by
dmmd (“device mapper—multi
disk”) including LVM environments built upon a software RAID set, or
a software RAID set built upon an LVM environment. These scripts are part of
the xen-tools package. After installation, they can be
found in /etc/xen/scripts:
block-dmmd
block-drbd-probe
block-npiv
The scripts allow external commands to perform an action, or series of actions, on the block devices before serving them up to a guest.
These scripts could formerly only be used with xl
or libxl using the disk configuration syntax
script=. They can now be used with libvirt by
specifying the base name of the block script in the
<source> element of the disk. For example:
<source dev='dmmd:md;/dev/md0;lvm;/dev/vgxen/lv-vm01'/>
The documentation in this section describes advanced management tasks and configuration options that might help technology innovators implement leading-edge virtualization solutions. It is provided as a courtesy and does not imply that all documented options and tasks are supported by Novell, Inc.
Virtual CD readers can be set up when a virtual machine is created or added to an existing virtual machine. A virtual CD reader can be based on a physical CD/DVD, or based on an ISO image. Virtual CD readers work differently depending on whether they are paravirtual or fully virtual.
A paravirtual machine can have up to 100 block devices composed of virtual CD readers and virtual disks. On paravirtual machines, virtual CD readers present the CD as a virtual disk with read-only access. Virtual CD readers cannot be used to write data to a CD.
After you have finished accessing a CD on a paravirtual machine, it is recommended that you remove the virtual CD reader from the virtual machine.
Paravirtualized guests can use the device type
devtype=cdrom. This partly emulates the behavior of a
real CD reader, and allows CDs to be changed. It is even possible to use
the eject command to open the tray of the CD reader.
A fully virtual machine can have up to four block devices composed of virtual CD readers and virtual disks. A virtual CD reader on a fully virtual machine interacts with an inserted CD in the way you would expect a physical CD reader to interact.
When a CD is inserted in the physical CD reader on the host computer,
all virtual machines with virtual CD readers based on the physical CD
reader, such as /dev/cdrom, can read the
inserted CD. Assuming the operating system has automount functionality,
the CD should automatically appear in the file system. Virtual CD
readers cannot be used to write data to a CD. They are configured as
read-only devices.
Virtual CD readers can be based on a CD inserted into the CD reader or on an ISO image file.
Make sure that the virtual machine is running and the operating system has finished booting.
Insert the desired CD into the physical CD reader or copy the desired ISO image to a location available to Dom0.
Select a new, unused block device in your VM Guest, such as
/dev/xvdb.
Choose the CD reader or ISO image that you want to assign to the guest.
When using a real CD reader, use the following command to assign the CD reader to your VM Guest. In this example, the name of the guest is alice:
tux > sudo xl block-attach alice target=/dev/sr0,vdev=xvdb,access=ro
When assigning an image file, use the following command:
tux > sudo xl block-attach alice target=/path/to/file.iso,vdev=xvdb,access=ro
A new block device, such as /dev/xvdb, is added
to the virtual machine.
If the virtual machine is running Linux, complete the following:
Open a terminal in the virtual machine and enter fdisk
-l to verify that the device was properly added. You can
also enter ls /sys/block to see all disks
available to the virtual machine.
The CD is recognized by the virtual machine as a virtual disk with a drive designation, for example:
/dev/xvdb
Enter the command to mount the CD or ISO image using its drive designation. For example,
tux > sudo mount -o ro /dev/xvdb /mnt
mounts the CD to a mount point named /mnt.
The CD or ISO image file should be available to the virtual machine at the specified mount point.
If the virtual machine is running Windows, reboot the virtual machine.
Verify that the virtual CD reader appears in its My
Computer section.
Make sure that the virtual machine is running and the operating system has finished booting.
If the virtual CD reader is mounted, unmount it from within the virtual machine.
Enter xl block-list alice on the host to view the guest's block devices.
Enter xl block-detach alice
BLOCK_DEV_ID to remove the virtual device
from the guest. If that fails, try to add -f to force
the removal.
Press the hardware eject button to eject the CD.
Some configurations, such as those that include rack-mounted servers,
require a computer to run without a video monitor, keyboard, or mouse.
This type of configuration is often called headless and
requires the use of remote administration technologies.
Typical configuration scenarios and technologies include:
If a graphical desktop, such as GNOME, is installed on the virtual
machine host, you can use a remote viewer, such as a VNC viewer. On a
remote computer, log in and manage the remote guest environment by
using graphical tools, such as tigervnc or
virt-viewer.
You can use the ssh command from a remote computer
to log in to a virtual machine host and access its text-based console.
You can then use the xl command to manage virtual
machines, and the virt-install command to create new virtual machines.
VNC viewer is used to view the environment of the running guest system in a graphical way. You can use it from Dom0 (known as local access or on-box access), or from a remote computer.
You can use the IP address of a VM Host Server and a VNC viewer to view the display of this VM Guest. When a virtual machine is running, the VNC server on the host assigns the virtual machine a port number to be used for VNC viewer connections. The assigned port number is the lowest port number available when the virtual machine starts. The number is only available for the virtual machine while it is running. After shutting down, the port number might be assigned to other virtual machines.
For example, if ports 1, 2, 4, and 5 are assigned to the running virtual machines, the VNC viewer assigns the lowest available port number, 3. If port number 3 is still in use the next time the virtual machine starts, the VNC server assigns a different port number to the virtual machine.
To use the VNC viewer from a remote computer, the firewall must permit access to as many ports as there are running VM Guest systems, starting with port 5900. For example, to run 10 VM Guest systems, you need to open the TCP ports 5900 to 5909.
To access the virtual machine from the local console running a VNC viewer client, enter one of the following commands:
vncviewer ::590#
vncviewer :#
# is the VNC viewer port number assigned to the virtual machine.
When accessing the VM Guest from a machine other than Dom0, use the following syntax:
tux > vncviewer 192.168.1.20::590#
In this case, the IP address of Dom0 is 192.168.1.20.
Although the default behavior of VNC viewer is to assign the first available port number, you should assign a specific VNC viewer port number to a specific virtual machine.
To assign a specific port number on a VM Guest, edit the xl setting of the virtual machine and change vnclisten to the desired value. Note that, for example, for port number 5902 you specify only 2, as 5900 is added automatically:
vfb = [ 'vnc=1,vnclisten="localhost:2"' ]
For more information regarding editing the xl settings of a guest domain, see Section 19.1, “XL—Xen Management Tool”.
Assign higher port numbers to avoid conflict with port numbers assigned by the VNC viewer, which uses the lowest available port number.
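The relationship between the display number given to vnclisten and the TCP port used by the VNC viewer can be sketched in the shell (the display number 2 is illustrative):

```shell
# VNC display number N is reachable on TCP port 5900+N.
display=2                     # as in vnclisten="localhost:2"
port=$((5900 + display))
echo "connect with: vncviewer host:$display  (TCP port $port)"
```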
If you access a virtual machine's display from the virtual machine host console (known as local or on-box access), you should use SDL instead of VNC viewer. VNC viewer is faster for viewing desktops over a network, but SDL is faster for viewing desktops from the same computer.
To set the default to use SDL instead of VNC, change the virtual machine's configuration information to the following. For instructions, see Section 19.1, “XL—Xen Management Tool”.
vfb = [ 'sdl=1' ]
Remember that, unlike a VNC viewer window, closing an SDL window terminates the virtual machine.
When a virtual machine is started, the host creates a virtual keyboard
that matches the keymap entry according to the virtual
machine's settings. If there is no keymap entry
specified, the virtual machine's keyboard defaults to English (US).
To view a virtual machine's current keymap entry,
enter the following command on the Dom0:
tux > xl list -l VM_NAME | grep keymap
To configure a virtual keyboard for a guest, use the following snippet:
vfb = [ 'keymap="de"' ]
For a complete list of supported keyboard layouts, see the
Keymaps section of the xl.cfg
manual page man 5 xl.cfg.
In Xen it is possible to specify how many and which CPU cores the Dom0 or VM Guest should use to retain its performance. The performance of Dom0 is important for the overall system, as the disk and network drivers are running on it. Also, I/O-intensive guest workloads may consume a lot of Dom0's CPU cycles. On the other hand, the performance of VM Guests is also important, so they can accomplish the tasks they were set up for.
Dedicating CPU resources to Dom0 results in a better overall performance of the virtualized environment because Dom0 has free CPU time to process I/O requests from VM Guests. Failing to dedicate exclusive CPU resources to Dom0 usually results in a poor performance and can cause the VM Guests to function incorrectly.
Dedicating CPU resources involves three basic steps: modifying the Xen boot line, binding Dom0's VCPUs to a physical processor, and configuring CPU-related options on VM Guests:
First, you need to append dom0_max_vcpus=X to the Xen boot line. Do so by adding the following line to /etc/default/grub:
GRUB_CMDLINE_XEN="dom0_max_vcpus=X"
If /etc/default/grub already contains a line setting GRUB_CMDLINE_XEN, append dom0_max_vcpus=X to this line instead.
X needs to be replaced by the number of VCPUs
dedicated to Dom0.
Update the GRUB 2 configuration file by running the following command:
tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg
Reboot for the change to take effect.
The next step is to bind (or “pin”) each of Dom0's VCPUs to a physical processor.
tux > sudo xl vcpu-pin Domain-0 0 0
tux > sudo xl vcpu-pin Domain-0 1 1
The first line binds Dom0's VCPU number 0 to the physical processor number 0, while the second line binds Dom0's VCPU number 1 to the physical processor number 1.
Lastly, you need to make sure no VM Guest uses the physical processors dedicated to Dom0's VCPUs. Assuming you are running an 8-CPU system, you need to add
cpus="2-7"
to the configuration file of the relevant VM Guest.
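Putting the three steps together for a host with 8 physical CPUs, the relevant settings might look like the following sketch (all values are illustrative):

```
# /etc/default/grub: dedicate two VCPUs to Dom0
GRUB_CMDLINE_XEN="dom0_max_vcpus=2"

# After rebooting, pin Dom0's VCPUs (run as root):
#   xl vcpu-pin Domain-0 0 0
#   xl vcpu-pin Domain-0 1 1

# VM Guest configuration file: keep the guest off CPUs 0 and 1
cpus="2-7"
```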
It is often necessary to dedicate specific CPU resources to a virtual machine. By default, a virtual machine uses any available CPU core. Its performance can be improved by assigning a reasonable number of physical processors to it, as other VM Guests are not allowed to use them after that. Assume a machine with 8 CPU cores where a virtual machine needs to use 2 of them. Change its configuration file as follows:
vcpus=2 cpus="2,3"
The above example dedicates 2 processors to the VM Guest, namely the 3rd and 4th ones (2 and 3 counted from zero). If you need to assign more physical processors, use a range such as cpus="2-7".
If you need to change the CPU assignment for a guest named “alice” in a hotplug manner, do the following on the related Dom0:
tux > sudo xl vcpu-set alice 2
tux > sudo xl vcpu-pin alice 0 2
tux > sudo xl vcpu-pin alice 1 3
The example will dedicate 2 physical processors to the guest, and bind its VCPU 0 to physical processor 2 and VCPU 1 to physical processor 3. Now check the assignment:
tux > sudo xl vcpu-list alice
Name      ID  VCPUs  CPU  State  Time(s)  CPU Affinity
alice      4      0    2  -b-        1.9  2-3
alice      4      1    3  -b-        2.8  2-3
In Xen, some features are only available for fully virtualized domains. They are not used very often, but may still be interesting in some environments.
Just as with physical hardware, it is sometimes desirable to boot a
VM Guest from a different device than its own boot device. For fully
virtual machines, it is possible to select a boot device with the
boot parameter in a domain xl configuration file:
boot = BOOT_DEVICE
BOOT_DEVICE can be one of
c for hard disk, d for CD-ROM, or
n for Network/PXE. You can specify multiple options,
and they will be attempted in the given order. For example,
boot = dc
boots from CD-ROM, and falls back to the hard disk if CD-ROM is not bootable.
To be able to migrate a VM Guest from one VM Host Server to a different
VM Host Server, the VM Guest system may only use CPU
features that are available on both VM Host Server systems. If the actual CPUs
are different on both hosts, it may be necessary to hide some features
before the VM Guest is started. This preserves the ability to migrate the VM Guest between both hosts. For fully virtualized guests,
this can be achieved by configuring the cpuid that is
available to the guest.
To gain an overview of the current CPU, have a look at
/proc/cpuinfo. This contains all the important
information that defines the current CPU.
To redefine a CPU, first have a look at the respective cpuid definitions of the CPU vendor, which are published in the vendor's processor documentation. An example cpuid configuration:
cpuid = "host,tm=0,sse3=0"
The syntax is a comma-separated list of key=value pairs, preceded by the
word "host". A few keys take a numerical value, while all others take a
single character which describes what to do with the feature bit. See
man 5 xl.cfg for a complete list of cpuid keys. The
respective bits may be changed by using the following values:
1
Force the corresponding bit to 1.
0
Force the corresponding bit to 0.
x
Use the values of the default policy.
k
Use the values defined by the host.
s
Like k, but preserve the value over migrations.
Note that counting bits is done from right to left, starting with bit
0.
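Since bits are counted from the right starting at bit 0, a given bit of a register value can be inspected with simple shell arithmetic. A small sketch (the register value is hypothetical):

```shell
# Bit 0 is the rightmost bit; shift right and mask to inspect a single bit.
value=$((0x80000001))   # hypothetical cpuid register content
for bit in 0 30 31; do
  echo "bit $bit = $(( (value >> bit) & 1 ))"
done
# bits 0 and 31 are set; bit 30 is not
```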
In case you need to increase the default number of PCI-IRQs available to Dom0 and/or VM Guest, you can do so by modifying the Xen kernel command line. Use the option extra_guest_irqs=DOMU_IRQS,DOM0_IRQS. The optional first number DOMU_IRQS is common for all VM Guests, while the optional second number DOM0_IRQS (preceded by a comma) is for Dom0. Changing the setting for VM Guests has no impact on Dom0 and vice versa. For example, to change Dom0 without changing VM Guests, use
extra_guest_irqs=,512
The boot loader controls how the virtualization software boots and runs. You can modify the boot loader properties by using YaST, or by directly editing the boot loader configuration file.
The YaST boot loader program is located at YaST › System › Boot Loader. Click the Bootloader Options tab and select the line containing the Xen kernel as the Default Boot Section. Confirm with OK. The next time you boot the host, it will be ready to provide the Xen virtualization environment.
You can use the Boot Loader program to specify functionality, such as:
Pass kernel command line parameters.
Specify the kernel image and initial RAM disk.
Select a specific hypervisor.
Pass additional parameters to the hypervisor. See http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html for their complete list.
You can customize your virtualization environment by editing the
/etc/default/grub file. Add the following line to
this file:
GRUB_CMDLINE_XEN="<boot_parameters>". Do not
forget to run grub2-mkconfig -o /boot/grub2/grub.cfg
after editing the file.
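For example, to limit the memory available to Dom0 (the value is illustrative):

```
# /etc/default/grub
GRUB_CMDLINE_XEN="dom0_mem=2048M,max:2048M"
```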
If the host’s physical disk reaches a state where it has no available space, a virtual machine using a virtual disk based on a sparse image file cannot write to its disk. Consequently, it reports I/O errors.
If this situation occurs, you should free up available space on the physical disk, remount the virtual machine’s file system, and set the file system back to read-write.
To check the actual disk requirements of a sparse image file, use the
command du -h <image file>.
To increase the available space of a sparse image file, first increase the file size and then the file system.
Touching the sizes of partitions or sparse files always bears the risk of data loss. Do not work without a backup.
The resizing of the image file can be done online, while the VM Guest is running. Increase the size of a sparse image file with:
tux > sudo dd if=/dev/zero of=<image file> count=0 bs=1M seek=<new size in MB>
For example, to increase the file /var/lib/xen/images/sles/disk0 to a size of 16 GB, use the command:
tux > sudo dd if=/dev/zero of=/var/lib/xen/images/sles/disk0 count=0 bs=1M seek=16000
It is also possible to increase the image files of devices that are not sparse files. However, you must know exactly where the previous image ends. Use the seek parameter to point to the end of the image file and use a command similar to the following:
tux > sudo dd if=/dev/zero of=/var/lib/xen/images/sles/disk0 seek=8000 bs=1M count=2000
Be sure to use the right seek offset, otherwise data loss may occur.
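The effect of the seek-based resize can be observed on any scratch file: only the apparent size changes, while almost no disk space is allocated. A minimal sketch using a temporary file:

```shell
# Growing a sparse file with dd only moves the end-of-file marker.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" count=0 bs=1M seek=100 2>/dev/null  # 100 MB sparse file
dd if=/dev/zero of="$IMG" count=0 bs=1M seek=200 2>/dev/null  # grow it to 200 MB
stat -c %s "$IMG"   # apparent size: 209715200 bytes
du -k "$IMG"        # blocks actually allocated: (almost) none
rm -f "$IMG"
```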
If the VM Guest is running during the resize operation, also resize the loop device that provides the image file to the VM Guest. First detect the correct loop device with the command:
tux > sudo losetup -j /var/lib/xen/images/sles/disk0
Then resize the loop device, for example /dev/loop0,
with the following command:
tux > sudo losetup -c /dev/loop0
Finally, check the size of the block device inside the guest system with the command fdisk -l /dev/xvdb. The device name depends on the device that was actually increased.
Resizing the file system inside the sparse file involves tools that depend on the actual file system.
With Xen it is possible to migrate a VM Guest system from one VM Host Server to another with almost no service interruption. This could be used, for example, to move a busy VM Guest to a VM Host Server that has stronger hardware or is not yet loaded. Or, if servicing of a VM Host Server is required, all VM Guest systems running on this machine can be migrated to other machines to avoid interruption of service. These are only two examples. Many more reasons may apply to your personal situation.
Before starting, some preliminary considerations regarding the VM Host Server should be taken into account:
All VM Host Server systems should use a similar CPU. The frequency is not so important, but they should use the same CPU family. To get more information about the CPU used, see the output of cat /proc/cpuinfo.
All resources that are used by a specific guest system must be available on all involved VM Host Server systems—for example all used block devices must exist on both VM Host Server systems.
If the hosts included in the migration process run in different subnets, make sure that either DHCP relay is available to the guests, or for guests with static network configuration, set up the network manually.
Using special features like PCI Pass-Through may be
problematic. Do not implement these when deploying for an environment
that should migrate VM Guest systems between different VM Host Server
systems.
For fast migrations, a fast network is mandatory. If possible, use Gigabit Ethernet and fast switches. Deploying VLANs might also help avoid collisions.
The block devices needed by the VM Guest system must be available on all involved VM Host Server systems. This is done by implementing some kind of shared storage that serves as container for the root file system of the migrated VM Guest system. Common possibilities include:
iSCSI can be set up to give access to the same block
devices from different systems at the same time.
NFS is a widely used root file system that can
easily be accessed from different locations. For more information, see
Chapter 22, Sharing File Systems with NFS.
DRBD can be used if only two VM Host Server systems are involved. This gives some extra data security, because the used data is mirrored over the network.
SCSI can also be used if the available hardware
permits shared access to the same disks.
NPIV is a special mode to use Fibre Channel disks. However, in this case all migration hosts must be attached to the same Fibre Channel switch. For more information about NPIV, see Section 20.1, “Mapping Physical Storage to Virtual Disks”. Commonly, this works if the Fibre Channel environment supports 4 Gbit or faster connections.
The actual migration of the VM Guest system is done with the command:
tux > sudo xl migrate <domain_name> <host>
The speed of the migration depends on how fast the memory footprint can be saved to disk, sent to the new VM Host Server, and loaded there. This means that small VM Guest systems can be migrated faster than large systems with a lot of memory.
For regular operation of many virtual guests, being able to check the sanity of all the different VM Guest systems is indispensable. Xen offers several tools besides the system tools to gather information about the system.
Basic monitoring of the VM Host Server (I/O and CPU) is available via the Virtual Machine Manager. Refer to Section 9.8.1, “Monitoring with Virtual Machine Manager” for details.
xentop
The preferred terminal application to gather information about the Xen virtual environment is xentop. Unfortunately, this tool needs a rather wide terminal, else it inserts line breaks into the display.
xentop has several command keys that can give you
more information about the system that is monitored. Some of the more
important are:
D
Change the delay between the refreshes of the screen.
N
Also display network statistics. Note that only standard configurations will be displayed. If you use a special configuration like a routed network, no network will be displayed.
B
Display the respective block devices and their cumulated usage count.
For more information about xentop see the manual page
man 1 xentop.
virt-top
libvirt offers the hypervisor-agnostic tool virt-top,
which is recommended for monitoring VM Guests. See Section 9.8.2, “Monitoring with virt-top” for details.
There are many system tools that also help monitoring or debugging a running openSUSE system. Many of these are covered in Chapter 2, System Monitoring Utilities. Especially useful for monitoring a virtualization environment are the following tools:
The command line utility ip can be used to monitor arbitrary network interfaces. This is especially useful if you have set up a routed or masqueraded network. To monitor a network interface named alice.0, run the following command:
tux > watch ip -s link show alice.0
In a standard setup, all the Xen VM Guest systems are attached to a virtual network bridge. The bridge command allows you to determine the connection between the bridge and the virtual network adapter in the VM Guest system. For example, the output of bridge link may look like the following:
2: eth0 state DOWN : <NO-CARRIER, ...,UP> mtu 1500 master br0
8: vnet0 state UNKNOWN : <BROADCAST, ...,LOWER_UP> mtu 1500 master virbr0 \
   state forwarding priority 32 cost 100
This shows that there are two virtual bridges defined on the system. One
is connected to the physical Ethernet device eth0, the
other one is connected to a VLAN interface vnet0.
Especially when using masquerade networks, or if several Ethernet interfaces are set up together with a firewall setup, it may be helpful to check the current firewall rules.
The command iptables may be used to check all the
different firewall settings. To list all the rules of a chain, or
even of the complete setup, you may use the commands
iptables-save or iptables -S.
In a standard Xen environment, the VM Guest systems have only
very limited information about the VM Host Server system they are running
on. If a guest should know more about the VM Host Server it runs on,
vhostmd can provide more information to selected
guests. To set up your system to run vhostmd,
proceed as follows:
Install the package vhostmd on the VM Host Server.
To add or remove metric sections from the
configuration, edit the file
/etc/vhostmd/vhostmd.conf. However, the default works
well.
Check the validity of the vhostmd.conf
configuration file with the command:
tux > cd /etc/vhostmd
tux > xmllint --postvalid --noout vhostmd.conf
Start the vhostmd daemon with the command sudo systemctl start
vhostmd.
If vhostmd should be started automatically during start-up of the system, run the command:
tux > sudo systemctl enable vhostmd
Attach the image file /dev/shm/vhostmd0 to the
VM Guest system named alice with the command:
tux > xl block-attach alice /dev/shm/vhostmd0,,xvdb,ro
Log on to the VM Guest system.
Install the client package vm-dump-metrics.
Run the command vm-dump-metrics. To save the result to
a file, use the option -d <filename>.
The result of vm-dump-metrics is XML output. The respective metric entries follow the DTD /etc/vhostmd/metric.dtd.
For more information, see the manual pages man 8
vhostmd and /usr/share/doc/vhostmd/README
on the VM Host Server system. On the guest, see the manual page man
1 vm-dump-metrics.
This section introduces basic information about XenStore, its role in the Xen environment, the directory structure of files used by XenStore, and the description of XenStore's commands.
XenStore is a database of configuration and status information
shared between VM Guests and the management tools running in
Dom0. VM Guests and the management tools read and write to
XenStore to convey configuration information, status updates, and
state changes. The XenStore database is managed by Dom0 and
supports simple operations such as reading and writing a key.
VM Guests and management tools can be notified of any changes in
XenStore by watching entries of interest. Note that the
xenstored daemon is managed by the
xencommons service.
XenStore is located on Dom0 in a single database file /var/lib/xenstored/tdb (tdb stands for trivial database).
XenStore database content is represented by a virtual file system
similar to /proc (for more information on
/proc, see Section 2.6, “The /proc File System”). The
tree has three main paths: /vm,
/local/domain, and /tool.
/vm - stores information about the VM Guest
configuration.
/local/domain - stores information about
VM Guest on the local node.
/tool - stores general information about various
tools.
Each VM Guest has two different ID numbers. The universal unique identifier (UUID) remains the same even if the VM Guest is migrated to another machine. The domain identifier (DOMID) is an identification number that represents a particular running instance. It typically changes when the VM Guest is migrated to another machine.
The file system structure of the XenStore database can be operated with the following commands:
xenstore-ls
Displays the full dump of the XenStore database.
xenstore-read path_to_xenstore_entry
Displays the value of the specified XenStore entry.
xenstore-exists xenstore_path
Reports whether the specified XenStore path exists.
xenstore-list xenstore_path
Displays all the children entries of the specified XenStore path.
xenstore-write path_to_xenstore_entry value
Updates the value of the specified XenStore entry.
xenstore-rm xenstore_path
Removes the specified XenStore entry or directory.
xenstore-chmod xenstore_path mode
Updates the read/write permissions on the specified XenStore path.
xenstore-control
Sends a command to the xenstored back-end,
such as triggering an integrity check.
/vm
The /vm path is indexed by the UUID of each
VM Guest, and stores configuration information such as the number of
virtual CPUs and the amount of allocated memory. There is a
/vm/<uuid> directory for each
VM Guest. To list the directory content, use
xenstore-list.
tux > sudo xenstore-list /vm
00000000-0000-0000-0000-000000000000
9b30841b-43bc-2af9-2ed3-5a649f466d79-1
The first line of the output belongs to Dom0, and the second one to a running VM Guest. The following command lists all the entries related to the VM Guest:
tux > sudo xenstore-list /vm/9b30841b-43bc-2af9-2ed3-5a649f466d79-1
image
rtc
device
pool_name
shadow_memory
uuid
on_reboot
start_time
on_poweroff
bootloader_args
on_crash
vcpus
vcpu_avail
bootloader
name
To read a value of an entry, for example the number of virtual CPUs
dedicated to the VM Guest, use xenstore-read:
tux > sudo xenstore-read /vm/9b30841b-43bc-2af9-2ed3-5a649f466d79-1/vcpus
1
A list of selected /vm/<uuid> entries
follows:
uuid
UUID of the VM Guest. It does not change during the migration process.
on_reboot
Specifies whether to destroy or restart the VM Guest in response to a reboot request.
on_poweroff
Specifies whether to destroy or restart the VM Guest in response to a halt request.
on_crash
Specifies whether to destroy or restart the VM Guest in response to a crash.
vcpus
Number of virtual CPUs allocated to the VM Guest.
vcpu_avail
Bitmask of active virtual CPUs for the VM Guest. The bitmask has a number of bits equal to the value of vcpus, with a bit set for each online virtual CPU.
name
The name of the VM Guest.
Regular VM Guests (not Dom0) use the
/vm/<uuid>/image path:
tux > sudo xenstore-list /vm/9b30841b-43bc-2af9-2ed3-5a649f466d79-1/image
ostype
kernel
cmdline
ramdisk
dmargs
device-model
display
An explanation of the used entries follows:
ostype
The OS type of the VM Guest.
kernel
The path on Dom0 to the kernel for the VM Guest.
cmdline
The kernel command line for the VM Guest used when booting.
ramdisk
The path on Dom0 to the RAM disk for the VM Guest.
dmargs
Shows arguments passed to the QEMU process. If you look at the
QEMU process with ps, you should see the same
arguments as in
/vm/<uuid>/image/dmargs.
/local/domain/<domid>
This path is indexed by the running domain (VM Guest) ID, and contains information about the running VM Guest. Remember that the domain ID changes during VM Guest migration. The following entries are available:
vm
The path of the /vm directory for this
VM Guest.
on_reboot, on_poweroff, on_crash, name
See identical options in Section 23.2.2, “/vm”
domid
Domain identifier for the VM Guest.
cpu
The current CPU to which the VM Guest is pinned.
cpu_weight
The weight assigned to the VM Guest for scheduling purposes. Higher weights use the physical CPUs more often.
Apart from the individual entries described above, there are also
several subdirectories under
/local/domain/<domid>, containing specific
entries. To see all entries available, refer to
XenStore
Reference.
/local/domain/<domid>/memory
Contains memory information.
/local/domain/<domid>/memory/target
contains target memory size for the VM Guest (in kilobytes).
/local/domain/<domid>/console
Contains information about a console used by the VM Guest.
/local/domain/<domid>/backend
Contains information about all back-end devices used by the VM Guest. The path has subdirectories of its own.
/local/domain/<domid>/device
Contains information about the front-end devices for the VM Guest.
/local/domain/<domid>/device-misc
Contains miscellaneous information about devices.
/local/domain/<domid>/store
Contains information about the VM Guest's store.
Setting up two Xen hosts as a failover system has several advantages compared to a setup where every server runs on dedicated hardware.
Failure of a single server does not cause major interruption of the service.
A single big machine is normally way cheaper than multiple smaller machines.
Adding new servers as needed is a trivial task.
The usage of the server is improved, which has positive effects on the power consumption of the system.
The setup of migration for Xen hosts is described in Section 22.3, “Migrating Xen VM Guest Systems”. In the following, several typical scenarios are described.
Xen can directly provide several remote block devices to the respective Xen guest systems. These include iSCSI, NPIV, and NBD. All of these may be used to do live migrations. When a storage system is already in place, first try to use the same device type you already used in the network.
If the storage system cannot be used directly but provides a possibility to offer the needed space over NFS, it is also possible to create image files on NFS. If the NFS file system is available on all Xen host systems, this method also allows live migrations of Xen guests.
When setting up a new system, one of the main considerations is whether a dedicated storage area network should be implemented. The following possibilities are available:
| Method | Complexity | Comments |
|---|---|---|
| Ethernet | low | Note that all block device traffic goes over the same Ethernet interface as the network traffic. This may limit the performance of the guest. |
| Ethernet dedicated to storage | medium | Running the storage traffic over a dedicated Ethernet interface may eliminate a bottleneck on the server side. However, planning your own network with your own IP address range and possibly a VLAN dedicated to storage requires numerous considerations. |
| NPIV | high | NPIV is a method to virtualize Fibre Channel connections. This is available with adapters that support a data rate of at least 4 Gbit/s and allows the setup of complex storage systems. |
Typically, a 1 Gbit/s Ethernet device can fully use a typical hard disk or storage system. When using very fast storage systems, such an Ethernet device will probably limit the speed of the system.
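As a rough sanity check of this claim, the theoretical payload ceiling of a 1 Gbit/s link can be computed as follows (protocol overhead from TCP/IP and the storage protocol is ignored here, so real-world throughput is somewhat lower):

```shell
# Theoretical ceiling of a 1 Gbit/s link, ignoring protocol overhead
link_bits_per_s=1000000000
bytes_per_s=$((link_bits_per_s / 8))
echo "$((bytes_per_s / 1000000)) MB/s"   # prints: 125 MB/s
```

About 125 MB/s is in the range of a single rotating hard disk, which is why a faster storage back-end quickly makes the Ethernet link the bottleneck.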
For space or budget reasons, it may be necessary to rely on storage that is local to the Xen host systems. To still maintain the possibility of live migrations, it is necessary to build block devices that are mirrored to both Xen hosts. The software that allows this is called Distributed Replicated Block Device (DRBD).
When setting up a system that uses DRBD to mirror block devices or files between two Xen hosts, both hosts should use identical hardware. If one of the hosts has slower hard disks, both hosts will suffer from this limitation.
During the setup, each of the required block devices should use its own DRBD device. The setup of such a system is quite a complex task.
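A minimal DRBD resource definition for such a mirror could look like the following sketch. All names in it (resource r0, backing partition /dev/sdb1, host names xenhost1/xenhost2 and their addresses) are hypothetical placeholders, not values from this manual:

```
resource r0 {
  protocol C;              # synchronous replication, so both copies stay consistent
  device    /dev/drbd0;    # the mirrored block device handed to the Xen guest
  disk      /dev/sdb1;     # hypothetical local backing partition, same on both hosts
  meta-disk internal;
  on xenhost1 {
    address 192.168.10.1:7788;
  }
  on xenhost2 {
    address 192.168.10.2:7788;
  }
}
```

One such resource is needed per mirrored block device, which is part of what makes the overall setup complex.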
When using several guest systems that need to communicate between each other, it is possible to do this over the regular interface. However, for security reasons it may be advisable to create a bridge that is only connected to guest systems.
In an HA environment that also should support live migrations, such a private bridge must be connected to the other Xen hosts. This is possible by using dedicated physical Ethernet devices and a dedicated network.
A different implementation method is using VLAN interfaces. In that case, all the traffic goes over the regular Ethernet interface. However, the VLAN interface does not get the regular traffic, because only the VLAN packets that are tagged for the correct VLAN are forwarded.
For more information about the setup of a VLAN interface see Section 12.2.3, “Using VLAN Interfaces”.
QEMU is a fast, cross-platform open source machine emulator which can emulate a huge number of hardware architectures for you. QEMU lets you run a complete unmodified operating system (VM Guest) on top of your existing system (VM Host Server).
You can also use QEMU for debugging purposes—you can easily stop your running virtual machine, inspect its state and save and restore it later.
QEMU consists of the following parts:
processor emulator (x86, z Systems, PowerPC, Sparc)
emulated devices (graphic card, network card, hard disks, mice)
generic devices used to connect the emulated devices to the related host devices
descriptions of the emulated machines (PC, Power Mac)
debugger
user interface used to interact with the emulator
QEMU is central to KVM and Xen Virtualization, where it provides the general machine emulation. Xen's usage of QEMU is somewhat hidden from the user, while KVM's usage exposes most QEMU features transparently. If the VM Guest hardware architecture is the same as the VM Host Server's architecture, QEMU can take advantage of the KVM acceleration (SUSE only supports QEMU with the KVM acceleration loaded).
Apart from providing a core virtualization infrastructure and processor-specific drivers, QEMU also provides an architecture-specific user space program for managing VM Guests. Depending on the architecture this program is one of:
qemu-system-i386
qemu-system-s390x
qemu-system-x86_64
In the following this command is called qemu-system-ARCH; in
examples the qemu-system-x86_64 command is used.
This section documents how to set up and use openSUSE Leap 42.3 as a QEMU-KVM based virtual machine host.
In general, the virtual guest system needs the same hardware resources as when installed on a physical machine. The more guests you plan to run on the host system, the more hardware resources—CPU, disk, memory, and network—you need to add to the VM Host Server.
To run KVM, your CPU must support virtualization, and
virtualization needs to be enabled in BIOS. The file
/proc/cpuinfo includes information about your CPU
features.
The KVM host requires several packages to be installed. To install all necessary packages, do the following:
Run YaST › Virtualization › Install Hypervisor and Tools.
Select KVM server and preferably also KVM tools, and confirm with Accept.
During the installation process, you can optionally let YaST create a network bridge for you automatically. If you do not plan to dedicate an additional physical network card to your virtual guests, a network bridge is the standard way to connect the guest machines to the network.
After all the required packages are installed (and new network setup
activated), try to load the KVM kernel module relevant for your CPU
type—kvm-intel or
kvm-amd:
root # modprobe kvm-intel
Check if the module is loaded into memory:
tux > lsmod | grep kvm
kvm_intel 64835 6
kvm                   411041  1 kvm_intel
Now the KVM host is ready to serve KVM VM Guests. For more information, see Chapter 28, Running Virtual Machines with qemu-system-ARCH.
You can improve the performance of KVM-based VM Guests by letting them fully use specific features of the VM Host Server's hardware (paravirtualization). This section introduces techniques to make the guests access the physical host's hardware directly—without the emulation layer—to make the most use of it.
Examples included in this section assume basic knowledge of the
qemu-system-ARCH command
line options. For more information, see
Chapter 28, Running Virtual Machines with qemu-system-ARCH.
virtio-scsi
virtio-scsi is an advanced storage stack for
KVM. It replaces the former virtio-blk stack
for SCSI devices pass-through. It has several advantages over
virtio-blk:
KVM guests have a limited number of PCI controllers, which results
in a limited number of possibly attached devices.
virtio-scsi solves this limitation by
grouping multiple storage devices on a single controller. Each device
on a virtio-scsi controller is represented
as a logical unit, or LUN.
virtio-blk uses a small set of
commands that need to be known to both the
virtio-blk driver and the virtual machine
monitor, and so introducing a new command requires updating both the
driver and the monitor.
By comparison, virtio-scsi does not define
commands, but rather a transport protocol for these commands following
the industry-standard SCSI specification. This approach is shared with
other technologies, such as Fibre Channel, ATAPI, and USB devices.
virtio-blk devices are presented inside the
guest as /dev/vdX,
which is different from device
names in physical systems and may cause migration problems.
virtio-scsi keeps the device names identical
to those on physical systems, making the virtual machines easily
relocatable.
For virtual disks backed by a whole LUN on the host, it is preferable
for the guest to send SCSI commands directly to the LUN
(pass-through). This is limited in
virtio-blk, as guests need to use the
virtio-blk protocol instead of SCSI command pass-through, and,
moreover, it is not available for Windows guests.
virtio-scsi natively removes these
limitations.
virtio-scsi Usage
KVM supports the SCSI pass-through feature with the
virtio-scsi-pci device:
root # qemu-system-x86_64 [...] \
-device virtio-scsi-pci,id=scsi
vhost-net
The vhost-net module is used to accelerate
KVM's paravirtualized network drivers. It provides better latency and
greater network throughput. Use the vhost-net
driver by starting the guest with the following example command line:
root # qemu-system-x86_64 [...] \
-netdev tap,id=guest0,vhost=on,script=no \
-net nic,model=virtio,netdev=guest0,macaddr=00:16:35:AF:94:4B
Note that guest0 is an identification string of the
vhost-driven device.
As the number of virtual CPUs increases in VM Guests, QEMU offers a way of improving the network performance using multiqueue. Multiqueue virtio-net scales the network performance by allowing VM Guest virtual CPUs to transfer packets in parallel. Multiqueue support is required on both the VM Host Server and VM Guest sides.
The multiqueue virtio-net solution is most beneficial in the following cases:
Network traffic packets are large.
VM Guest has many connections active at the same time, mainly between the guest systems, or between the guest and the host, or between the guest and an external system.
The number of active queues is equal to the number of virtual CPUs in the VM Guest.
While multiqueue virtio-net increases the total network throughput, it also increases CPU consumption, as it uses more of the virtual CPUs' power.
The following procedure lists important steps to enable the multiqueue
feature with qemu-system-ARCH. It assumes that a tap
network device with multiqueue capability (supported since kernel
version 3.8) is set up on the VM Host Server.
In qemu-system-ARCH, enable multiqueue for the tap
device:
-netdev tap,vhost=on,queues=2*N
where N stands for the number of queue pairs.
In qemu-system-ARCH, enable multiqueue and specify
MSI-X (Message Signaled Interrupt) vectors for the virtio-net-pci
device:
-device virtio-net-pci,mq=on,vectors=2*N+2
where the formula for the number of MSI-X vectors results from: N vectors for TX (transmit) queues, N for RX (receive) queues, one for configuration purposes, and one for possible VQ (virtqueue) control.
In VM Guest, enable multiqueue on the relevant network interface
(eth0 in this example):
tux > sudo ethtool -L eth0 combined 2*N
The resulting qemu-system-ARCH command line will look
similar to the following example:
qemu-system-x86_64 [...] -netdev tap,id=guest0,queues=8,vhost=on \
-device virtio-net-pci,netdev=guest0,mq=on,vectors=10
Note that the id of the network device
(guest0) needs to be identical for both options.
Inside the running VM Guest, specify the following command with
root privileges:
tux > sudo ethtool -L eth0 combined 8
Now the guest system networking uses the multiqueue support from the
qemu-system-ARCH hypervisor.
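The queue and vector arithmetic used in the steps above can be checked with a short shell sketch; N=4 here is an arbitrary example value, chosen to reproduce the queues=8, vectors=10 figures from the command line above:

```shell
N=4                        # number of queue pairs (example value)
queues=$((2 * N))          # one TX plus one RX queue per pair
vectors=$((2 * N + 2))     # N TX + N RX + 1 configuration + 1 virtqueue control
echo "queues=$queues vectors=$vectors"   # prints: queues=8 vectors=10
```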
Directly assigning a PCI device to a VM Guest (PCI pass-through) avoids the performance penalty of device emulation in performance-critical paths. VFIO replaces the traditional KVM PCI pass-through device assignment. A prerequisite for this feature is a VM Host Server configuration as described in Important: Requirements for VFIO and SR-IOV.
To be able to assign a PCI device via VFIO to a VM Guest, you need to find out which IOMMU Group it belongs to. The IOMMU (input/output memory management unit that connects a direct memory access-capable I/O bus to the main memory) API supports the notion of groups. A group is a set of devices that can be isolated from all other devices in the system. Groups are therefore the unit of ownership used by VFIO.
Identify the host PCI device to assign to the guest.
tux > sudo lspci -nn
[...]
00:10.0 Ethernet controller [0200]: Intel Corporation 82576 \
Virtual Function [8086:10ca] (rev 01)
[...]
Note down the device address (00:10.0 in this case) and
the vendor and device ID (8086:10ca).
Find the IOMMU group of this device:
tux > sudo readlink /sys/bus/pci/devices/0000\:00\:10.0/iommu_group
../../../kernel/iommu_groups/20
The IOMMU group for this device is 20. Now you can
check the devices belonging to the same IOMMU group:
tux > sudo ls -l /sys/bus/pci/devices/0000:01:10.0/iommu_group/devices/0000:01:10.0
[...] 0000:00:1e.0 -> ../../../../devices/pci0000:00/0000:00:1e.0
[...] 0000:01:10.0 -> ../../../../devices/pci0000:00/0000:00:1e.0/0000:01:10.0
[...] 0000:01:10.1 -> ../../../../devices/pci0000:00/0000:00:1e.0/0000:01:10.1
Unbind the device from the device driver:
tux > sudo echo "0000:01:10.0" > /sys/bus/pci/devices/0000\:01\:10.0/driver/unbind
Bind the device to the vfio-pci driver using the vendor ID from step 1:
tux > sudo echo "8086 10ca" > /sys/bus/pci/drivers/vfio-pci/new_id
A new device
/dev/vfio/IOMMU_GROUP
will be created as a result, /dev/vfio/20 in this
case.
Change the ownership of the newly created device:
tux > sudo chown qemu.qemu /dev/vfio/DEVICE
Now run the VM Guest with the PCI device assigned.
tux > sudo qemu-system-ARCH [...] -device vfio-pci,host=00:10.0,id=ID
As of openSUSE Leap 42.3, hotplugging of PCI devices passed to a VM Guest via VFIO is not supported.
You can find more detailed information on the
VFIO driver in the
/usr/src/linux/Documentation/vfio.txt file (package
kernel-source needs to be installed).
VM Guests usually run in a separate computing space—they are provided their own memory range, dedicated CPUs, and file system space. The ability to share parts of the VM Host Server's file system makes the virtualization environment more flexible by simplifying mutual data exchange. Network file systems, such as CIFS and NFS, have been the traditional way of sharing directories. But as they are not specifically designed for virtualization purposes, they suffer from major performance and feature issues.
KVM introduces a new optimized method called VirtFS (sometimes called “file system pass-through”). VirtFS uses a paravirtual file system driver, which avoids converting the guest application file system operations into block device operations, and then again into host file system operations.
You typically use VirtFS for the following situations:
To access a shared directory from several guests, or to provide guest-to-guest file system access.
To replace the virtual disk as the root file system to which the guest's RAM disk connects during the guest boot process.
To provide storage services to different customers from a single host file system in a cloud environment.
In QEMU, the implementation of VirtFS is simplified by defining two types of devices:
virtio-9p-pci device which transports protocol
messages and data between the host and the guest.
fsdev device which defines the export file system
properties, such as file system type and security model.
tux > sudo qemu-system-x86_64 [...] \
-fsdev local,id=exp11,path=/tmp/2,security_model=mapped3 \
-device virtio-9p-pci,fsdev=exp14,mount_tag=v_tmp5
Identification of the file system to be exported. | |
File system path on the host to be exported. | |
Security model to be used— | |
The exported file system ID defined before with | |
Mount tag used later on the guest to mount the exported file system. |
Such an exported file system can be mounted on the guest as follows:
tux > sudo mount -t 9p -o trans=virtio v_tmp /mnt
where v_tmp is the mount tag defined earlier with
-device mount_tag= and /mnt is
the mount point where you want to mount the exported file system.
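To mount the share automatically at each guest boot, a line can be added to the guest's /etc/fstab. This is a sketch assuming the v_tmp mount tag and the /mnt mount point from the example above:

```
v_tmp  /mnt  9p  trans=virtio  0  0
```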
Kernel Same Page Merging (KSM) is a Linux kernel feature that merges identical memory pages from multiple running processes into one memory region. Because KVM guests run as processes under Linux, KSM provides the memory overcommit feature to hypervisors for more efficient use of memory. Therefore, if you need to run multiple virtual machines on a host with limited memory, KSM may be helpful to you.
KSM stores its status information in
the files under the /sys/kernel/mm/ksm directory:
tux > ls -1 /sys/kernel/mm/ksm
full_scans
merge_across_nodes
pages_shared
pages_sharing
pages_to_scan
pages_unshared
pages_volatile
run
sleep_millisecs
For more information on the meaning of the
/sys/kernel/mm/ksm/* files, see
/usr/src/linux/Documentation/vm/ksm.txt (package
kernel-source).
To use KSM, do the following.
Although openSUSE Leap includes KSM support in the kernel, it is disabled by default. To enable it, run the following command:
root # echo 1 > /sys/kernel/mm/ksm/run
Now run several VM Guests under KVM and inspect the content of
files pages_sharing and
pages_shared, for example:
tux > while [ 1 ]; do cat /sys/kernel/mm/ksm/pages_shared; sleep 1; done
13522
13523
13519
13518
13520
13520
13528
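According to Documentation/vm/ksm.txt, pages_sharing indicates how much is being saved, so a rough estimate of the saved memory is pages_sharing multiplied by the page size. The following sketch uses a hypothetical counter value; on a live system, read it from /sys/kernel/mm/ksm/pages_sharing instead:

```shell
# Hypothetical counter value; on a live system read /sys/kernel/mm/ksm/pages_sharing
page_size=4096        # bytes; typical on x86-64 (see getconf PAGESIZE)
pages_sharing=13520   # "how much saved" according to Documentation/vm/ksm.txt
echo "approx. memory saved: $((pages_sharing * page_size / 1024 / 1024)) MiB"
# prints: approx. memory saved: 52 MiB
```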
The libvirt-based tools such as
virt-manager and virt-install
offer convenient interfaces to set up and
manage virtual machines. They act as a kind of wrapper for the
qemu-system-ARCH command. However, it is also possible to
use qemu-system-ARCH directly without using
libvirt-based tools.
qemu-system-ARCH and libvirt
Virtual Machines created with
qemu-system-ARCH are not "visible" to the
libvirt-based tools.
qemu-system-ARCH
In the following example, a virtual machine for a SUSE Linux Enterprise Server 11 installation is created. For detailed information on the commands, refer to the respective man pages.
If you do not already have an image of a system that you want to run in a virtualized environment, you need to create one from the installation media. In such case, you need to prepare a hard disk image, and obtain an image of the installation media or the media itself.
Create a hard disk with qemu-img.
tux > qemu-img create1 -f raw2 /images/sles/hda3 8G4
The subcommand | |
Specify the disk's format with the | |
The full path to the image file. | |
The size of the image—8 GB in this case. The image is created as a sparse image file that grows when the disk is filled with data. The specified size defines the maximum size to which the image file can grow. |
After at least one hard disk image is created, you can set up a virtual
machine with qemu-system-ARCH that will boot into the
installation system:
root # qemu-system-x86_64 -name "sles"1 -machine accel=kvm -M pc2 -m 7683 \
-smp 24 -boot d5 \
-drive file=/images/sles/hda,if=virtio,index=0,media=disk,format=raw6 \
-drive file=/isos/SLES-11-SP3-DVD-x86_64-GM-DVD1.iso,index=1,media=cdrom7 \
-net nic,model=virtio,macaddr=52:54:00:05:11:118 -net user \
-vga cirrus9 -balloon virtio10
Name of the virtual machine that will be displayed in the window caption and be used for the VNC server. This name must be unique. | |
Specifies the machine type. Use | |
Maximum amount of memory for the virtual machine. | |
Defines an SMP system with two processors. | |
Specifies the boot order. Valid values are | |
Defines the first ( | |
The second ( | |
Defines a paravirtualized ( | |
Specifies the graphic card. If you specify
| |
Defines the paravirtualized balloon device that allows you to dynamically
change the amount of memory (up to the maximum value specified with the
parameter |
After the installation of the guest operating system finishes, you can start the related virtual machine without the need to specify the CD-ROM device:
root # qemu-system-x86_64 -name "sles" -machine type=pc,accel=kvm -m 768 \
-smp 2 -boot c \
-drive file=/images/sles/hda,if=virtio,index=0,media=disk,format=raw \
-net nic,model=virtio,macaddr=52:54:00:05:11:11 \
-vga cirrus -balloon virtio
qemu-img
In the previous section (see
Section 27.1, “Basic Installation with qemu-system-ARCH”), we used the
qemu-img command to create an image of a hard disk. You
can, however, use qemu-img for general disk image
manipulation. This section introduces qemu-img
subcommands to help manage the disk images flexibly.
qemu-img uses subcommands (like
zypper does) to do specific tasks. Each subcommand
understands a different set of options. Some options are general and used
by several of these subcommands, while some are unique to the related
subcommand. See the qemu-img manual page (man 1
qemu-img) for a list of all supported options.
qemu-img uses the following general syntax:
tux > qemu-img subcommand [options]
and supports the following subcommands:
create
Creates a new disk image on the file system.
check
Checks an existing disk image for errors.
compare
Checks whether two images have the same content.
map
Dumps the metadata of the image file name and its backing file chain.
amend
Amends the image format specific options for the image file name.
convert
Converts an existing disk image to a new one in a different format.
info
Displays information about the relevant disk image.
snapshot
Manages snapshots of existing disk images.
commit
Applies changes made to an existing disk image.
rebase
Creates a new base image based on an existing image.
resize
Increases or decreases the size of an existing image.
This section describes how to create disk images, check their condition, convert a disk image from one format to another, and get detailed information about a particular disk image.
Use qemu-img create to create a new disk image for your
VM Guest operating system. The command uses the following syntax:
tux > qemu-img create -f fmt1 -o options2 fname3 size4
The format of the target image. Supported formats are
| |
Some image formats support additional options to be passed on the
command line. You can specify them here with the | |
Path to the target disk image to be created. | |
Size of the target disk image (if not already specified with the
|
To create a new disk image sles.raw in the directory
/images growing up to a maximum size of 4 GB, run the
following command:
tux > qemu-img create -f raw -o size=4G /images/sles.raw
Formatting '/images/sles.raw', fmt=raw size=4294967296
tux > ls -l /images/sles.raw
-rw-r--r-- 1 tux users 4294967296 Nov 15 15:56 /images/sles.raw
tux > qemu-img info /images/sles.raw
image: /images/sles.raw
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 0
As you can see, the virtual size of the newly created image is 4 GB, but the actual reported disk size is 0 as no data has been written to the image yet.
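The same apparent-versus-allocated size distinction can be reproduced with any sparse file using coreutils alone; the file name below is arbitrary:

```shell
# Create a 1 GB sparse file: large apparent size, (almost) no allocated blocks
truncate -s 1G sparse-demo.img
stat -c 'apparent size: %s bytes' sparse-demo.img
# prints: apparent size: 1073741824 bytes
du -k sparse-demo.img   # allocated kB, typically 0 for a freshly created sparse file
rm sparse-demo.img
```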
If you need to create a disk image on the Btrfs file system, you can use
nocow=on to reduce the performance overhead created by
the copy-on-write feature of Btrfs:
tux > qemu-img create -o nocow=on test.img 8G
If you, however, want to use copy-on-write (for example for creating
snapshots or sharing them across virtual machines), then leave the
command line without the nocow option.
Use qemu-img convert to convert disk images to another
format. To get a complete list of image formats supported by QEMU, run
qemu-img -h and look at the last line
of the output. The command uses the following syntax:
tux > qemu-img convert -c1 -f fmt2 -O out_fmt3 -o options4 fname5 out_fname6
Applies compression on the target disk image. Only
| |
The format of the source disk image. It is usually autodetected and can therefore be omitted. | |
The format of the target disk image. | |
Specify additional options relevant for the target image format. Use
| |
Path to the source disk image to be converted. | |
Path to the converted target disk image. |
tux > qemu-img convert -O vmdk /images/sles.raw \
/images/sles.vmdk
tux > ls -l /images/
-rw-r--r-- 1 tux users 4294967296 Nov 16 10:50 sles.raw
-rw-r--r-- 1 tux users 2574450688 Nov 16 14:18 sles.vmdk
To see a list of options relevant for the selected target image format,
run the following command (replace vmdk with your image
format):
tux > qemu-img convert -O vmdk /images/sles.raw \
/images/sles.vmdk -o ?
Supported options:
size Virtual disk size
backing_file File name of a base image
compat6 VMDK version 6 image
subformat VMDK flat extent format, can be one of {monolithicSparse \
(default) | monolithicFlat | twoGbMaxExtentSparse | twoGbMaxExtentFlat}
scsi SCSI image
Use qemu-img check to check the existing disk image for
errors. Not all disk image formats support this feature. The command uses
the following syntax:
tux > qemu-img check -f fmt1 fname2
The format of the source disk image. It is usually autodetected and can therefore be omitted. | |
Path to the source disk image to be checked. |
If no error is found, the command returns no output. Otherwise, the type and number of errors found is shown.
tux > qemu-img check -f qcow2 /images/sles.qcow2
ERROR: invalid cluster offset=0x2af0000
[...]
ERROR: invalid cluster offset=0x34ab0000
378 errors were found on the image.
When creating a new image, you must specify its maximum size before the image is created (see Section 27.2.2.1, “qemu-img create”). After you have installed the VM Guest and have been using it for some time, the initial size of the image may no longer be sufficient. In that case, add more space to it.
To increase the size of an existing disk image by 2 gigabytes, use:
tux > qemu-img resize /images/sles.raw +2GB
You can resize the disk image using the formats raw,
qcow2 and qed. To resize an image
in another format, convert it to a supported format with
qemu-img convert first.
The image now contains an empty space of 2 GB after the final partition. You can resize the existing partitions or add new ones.
qcow2 is the main disk image format used by QEMU. Its size grows on demand, and the disk space is only allocated when it is actually needed by the virtual machine.
A qcow2 formatted file is organized in units of constant size. These units are called clusters. Viewed from the guest side, the virtual disk is also divided into clusters of the same size. QEMU defaults to 64 kB clusters, but you can specify a different value when creating a new image:
tux > qemu-img create -f qcow2 -o cluster_size=128K virt_disk.qcow2 4G
A qcow2 image contains a set of tables organized in two levels that are called the L1 and L2 tables. There is just one L1 table per disk image, while there can be many L2 tables depending on how big the image is.
To read or write data to the virtual disk, QEMU needs to read its corresponding L2 table to find out the relevant data location. Because reading the table for each I/O operation consumes system resources, QEMU keeps a cache of L2 tables in memory to speed up disk access.
The cache size relates to the amount of allocated space. L2 cache can map the following amount of virtual disk:
disk_size = l2_cache_size * cluster_size / 8
With the default 64 kB of cluster size, that is
disk_size = l2_cache_size * 8192
Therefore, to have a cache that maps
n gigabytes of disk space with the default cluster
size, you need
l2_cache_size = disk_size_GB * 131072
QEMU uses 1 MB (1048576 bytes) of L2 cache by default. Following the above formulas, 1 MB of L2 cache covers 8 GB (1048576 / 131072) of virtual disk. This means that the performance is fine with the default L2 cache size if your virtual disk size is up to 8 GB. For larger disks, you can speed up the disk access by increasing the L2 cache size.
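Rearranged for a concrete disk, the formula above gives the cache size to request. A sketch for a hypothetical 50 GB qcow2 image with the default 64 kB cluster size:

```shell
cluster_size=65536    # default qcow2 cluster size in bytes (64 kB)
disk_size_gb=50       # hypothetical virtual disk size
# From disk_size = l2_cache_size * cluster_size / 8:
l2_cache_size=$((disk_size_gb * 1024 * 1024 * 1024 * 8 / cluster_size))
echo "l2-cache-size=$l2_cache_size"   # prints: l2-cache-size=6553600
```

The result, about 6.25 MB, matches the shortcut l2_cache_size = disk_size_GB * 131072 and can be passed to the -drive option as shown below.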
You can use the -drive option on the QEMU command line
to specify the cache sizes. Alternatively when communicating via QMP, use
the blockdev-add command. For more information on QMP,
see Section 29.11, “QMP - QEMU Machine Protocol”.
The following options configure the cache size for the virtual guest:
The maximum size of the L2 table cache.
The maximum size of the refcount block cache. For more information on refcount, see https://github.com/qemu/qemu/blob/master/docs/specs/qcow2.txt.
The maximum size of both caches combined.
When specifying values for the options above, be aware of the following:
The size of both the L2 and refcount block caches needs to be a multiple of the cluster size.
If you only set one of the options, QEMU will automatically adjust the other options so that the L2 cache is 4 times bigger than the refcount cache.
The refcount cache is used much less often than the L2 cache, therefore you can keep it relatively small:
root # qemu-system-ARCH [...] \
-drive file=disk_image.qcow2,l2-cache-size=4194304,refcount-cache-size=262144
The larger the cache, the more memory it consumes. There is a separate L2 cache for each qcow2 file. When using a lot of big disk images, you will probably need a considerably large amount of memory. Memory consumption is even worse if you add backing files (Section 27.2.4, “Manipulate Disk Images Effectively”) and snapshots (see Section 27.2.3, “Managing Snapshots of Virtual Machines with qemu-img”) to the guest's setup chain.
That is why QEMU introduced the cache-clean-interval
setting. It defines an interval in seconds after which all cache entries
that have not been accessed are removed from memory.
The following example removes all unused cache entries every 10 minutes:
root # qemu-system-ARCH [...] -drive file=hd.qcow2,cache-clean-interval=600
If this option is not set, the default value is 0 and it disables this feature.
Virtual Machine snapshots are snapshots of the complete environment in which a VM Guest is running. The snapshot includes the state of the processor (CPU), memory (RAM), devices, and all writable disks.
Snapshots are helpful when you need to save your virtual machine in a particular state. For example, after you have configured network services on a virtualized server, you can quickly start the virtual machine in that saved state. Or you can create a snapshot after the virtual machine has been powered off, to create a backup state before you try something experimental that could make the VM Guest unstable. This section introduces the latter case, while the former is described in Chapter 29, Virtual Machine Administration Using QEMU Monitor.
To use snapshots, your VM Guest must contain at least one writable hard
disk image in qcow2 format. This device is usually the
first virtual hard disk.
Virtual Machine snapshots are created with the
savevm command in the interactive QEMU monitor. To
make identifying a particular snapshot easier, you can assign it a
tag. For more information on QEMU monitor, see
Chapter 29, Virtual Machine Administration Using QEMU Monitor.
Once your qcow2 disk image contains saved snapshots, you
can inspect them with the qemu-img snapshot command.
Do not create or delete virtual machine snapshots with the
qemu-img snapshot command while the virtual machine is
running. Otherwise, you may damage the disk image containing the saved
state of the virtual machine.
Use qemu-img snapshot -l
DISK_IMAGE to view a list of all existing
snapshots saved in the DISK_IMAGE image. You can get
the list even while the VM Guest is running.
tux > qemu-img snapshot -l /images/sles.qcow2
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1 booting 4.4M 2013-11-22 10:51:10 00:00:20.476
2 booted 184M 2013-11-22 10:53:03 00:02:05.394
3 logged_in 273M 2013-11-22 11:00:25 00:04:34.843
4         ff_and_term_running    372M 2013-11-22 11:12:27   00:08:44.965
ID: Unique identification number of the snapshot. Usually auto-incremented.
TAG: Unique description string of the snapshot. It is meant as a human-readable version of the ID.
VM SIZE: The disk space occupied by the snapshot. Note that the more memory is consumed by running applications, the bigger the snapshot is.
DATE: Time and date the snapshot was created.
VM CLOCK: The current state of the virtual machine's clock.
Use qemu-img snapshot -c
SNAPSHOT_TITLE
DISK_IMAGE to create a snapshot of the current
state of a virtual machine that was previously powered off.
tux > qemu-img snapshot -c backup_snapshot /images/sles.qcow2
tux > qemu-img snapshot -l /images/sles.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 booting 4.4M 2013-11-22 10:51:10 00:00:20.476
2 booted 184M 2013-11-22 10:53:03 00:02:05.394
3 logged_in 273M 2013-11-22 11:00:25 00:04:34.843
4 ff_and_term_running 372M 2013-11-22 11:12:27 00:08:44.965
5         backup_snapshot           0 2013-11-22 14:14:00   00:00:00.000
If something breaks in your VM Guest and you need to restore the state of the saved snapshot (ID 5 in our example), power off your VM Guest and execute the following command:
tux > qemu-img snapshot -a 5 /images/sles.qcow2
The next time you run the virtual machine with
qemu-system-ARCH, it will be in the state of snapshot
number 5.
The qemu-img snapshot -c command is not related to the
savevm command of QEMU monitor (see
Chapter 29, Virtual Machine Administration Using QEMU Monitor). For example, you cannot apply a
snapshot with qemu-img snapshot -a on a snapshot
created with savevm in QEMU's monitor.
Use qemu-img snapshot -d
SNAPSHOT_ID
DISK_IMAGE to delete old or unneeded snapshots
of a virtual machine. This saves some disk space inside the
qcow2 disk image as the space occupied by the snapshot
data is restored:
tux > qemu-img snapshot -d 2 /images/sles.qcow2
Imagine the following real-life situation: you are a server administrator who runs and manages several virtualized operating systems. One group of these systems is based on one specific distribution, while another group (or groups) is based on different versions of the distribution or even on a different (and maybe non-Unix) platform. To make the case even more complex, individual virtual guest systems based on the same distribution usually differ according to the department and deployment. A file server typically uses a different setup and services than a Web server does, while both may still be based on openSUSE.
With QEMU it is possible to create “base” disk images. You can use them as template virtual machines. These base images will save you plenty of time because you will never need to install the same operating system more than once.
First, build a disk image as usual and install the target system on it.
For more information, see Section 27.1, “Basic Installation with qemu-system-ARCH”
and Section 27.2.2, “Creating, Converting and Checking Disk Images”. Then build a
new image while using the first one as a base image. The base image is
also called a backing file. After your new
derived image is built, never boot the base image
again, but boot the derived image instead. Several derived images may
depend on one base image at the same time. Therefore, changing the base
image can damage the dependencies. While using your derived image, QEMU
writes changes to it and uses the base image only for reading.
It is a good practice to create a base image from a freshly installed (and, if needed, registered) operating system with no patches applied and no additional applications installed or removed. Later on, you can create another base image with the latest patches applied and based on the original base image.
While you can use the raw format for base images, you
cannot use it for derived images because the raw
format does not support the backing_file option. Use
for example the qcow2 format for the derived images.
For example, /images/sles_base.raw is the base image
holding a freshly installed system.
tux > qemu-img info /images/sles_base.raw
image: /images/sles_base.raw
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 2.4G
The image's reserved size is 4 GB, the actual size is 2.4 GB, and its
format is raw. Create an image derived from the
/images/sles_base.raw base image with:
tux > qemu-img create -f qcow2 /images/sles_derived.qcow2 \
-o backing_file=/images/sles_base.raw
Formatting '/images/sles_derived.qcow2', fmt=qcow2 size=4294967296 \
backing_file='/images/sles_base.raw' encryption=off cluster_size=0
Look at the derived image details:
tux > qemu-img info /images/sles_derived.qcow2
image: /images/sles_derived.qcow2
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
disk size: 140K
cluster_size: 65536
backing file: /images/sles_base.raw (actual path: /images/sles_base.raw)
Although the reserved size of the derived image is the same as the size of the base image (4 GB), the actual size is only 140 KB. The reason is that only changes made to the system inside the derived image are saved. Run the derived virtual machine, register it if needed, and apply the latest patches. Make any other changes to the system, such as removing unneeded software packages or installing new ones. Then shut the VM Guest down and examine its details once more:
tux > qemu-img info /images/sles_derived.qcow2
image: /images/sles_derived.qcow2
file format: qcow2
virtual size: 4.0G (4294967296 bytes)
disk size: 1.1G
cluster_size: 65536
backing file: /images/sles_base.raw (actual path: /images/sles_base.raw)
The disk size value has grown to 1.1 GB, which is the
disk space occupied by the changes on the file system compared to the base
image.
After you have modified the derived image (applied patches, installed specific applications, changed environment settings, etc.), it reaches the desired state. At that point, you can merge the original base image and the derived image to create a new base image.
Your original base image (/images/sles_base.raw)
holds a freshly installed system. It can serve as a template for new,
modified base images, while a new base image can contain, for example,
the same system as the first one plus all security and update patches
applied. After you have created this new base image, you can use it as
a template for more specialized derived images as well. The new base
image becomes independent of the original one. The process of creating
base images from derived ones is called rebasing:
tux > qemu-img convert /images/sles_derived.qcow2 \
-O raw /images/sles_base2.raw
This command created the new base image
/images/sles_base2.raw using the
raw format.
tux > qemu-img info /images/sles_base2.raw
image: /images/sles_base2.raw
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 2.8G
The new image is 0.4 gigabytes bigger than the original base image. It uses no backing file, and you can easily create new derived images based upon it. This lets you create a sophisticated hierarchy of virtual disk images for your organization, saving a lot of time and work.
It can be useful to mount a virtual disk image under the host system. It is strongly recommended to read Chapter 16, libguestfs and use dedicated tools to access a virtual machine image. However, if you need to do this manually, follow this guide.
Linux systems can mount an internal partition of a raw
disk image using a loopback device. The first example procedure is more
complex but more illustrative, while the second one is straightforward:
Set a loop device on the disk image whose partition you want to mount.
tux > losetup /dev/loop0 /images/sles_base.raw
Find the sector size and the starting sector number of the partition you want to mount.
tux > fdisk -lu /dev/loop0
Disk /dev/loop0: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Disk identifier: 0x000ceca8
Device Boot Start End Blocks Id System
/dev/loop0p1 63 1542239 771088+ 82 Linux swap
/dev/loop0p2   *     1542240     8385929     3421845  83  Linux
Calculate the partition start offset:
sector_size * sector_start = 512 * 1542240 = 789626880
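The same calculation can be done directly in the shell:

```shell
# Byte offset of the partition = sector size * starting sector
sector_size=512
sector_start=1542240
offset=$((sector_size * sector_start))
echo "$offset"   # prints 789626880
```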
Delete the loop and mount the partition inside the disk image with the calculated offset on a prepared directory.
tux > losetup -d /dev/loop0
tux > mount -o loop,offset=789626880 \
/images/sles_base.raw /mnt/sles/
tux > ls -l /mnt/sles/
total 112
drwxr-xr-x  2 root root 4096 Nov 16 10:02 bin
drwxr-xr-x  3 root root 4096 Nov 16 10:27 boot
drwxr-xr-x  5 root root 4096 Nov 16 09:11 dev
[...]
drwxrwxrwt 14 root root 4096 Nov 24 09:50 tmp
drwxr-xr-x 12 root root 4096 Nov 16 09:16 usr
drwxr-xr-x 15 root root 4096 Nov 16 09:22 var
Copy one or more files onto the mounted partition and unmount it when finished.
tux > cp /etc/X11/xorg.conf /mnt/sles/root/tmp
tux > ls -l /mnt/sles/root/tmp
tux > umount /mnt/sles/
Never mount a partition of an image of a running virtual machine in a
read-write mode. This could corrupt the partition and
break the whole VM Guest.
Once you have a virtual disk image ready (for more information on disk
images, see Section 27.2, “Managing Disk Images with qemu-img”), it is time to
start the related virtual machine.
Section 27.1, “Basic Installation with qemu-system-ARCH” introduced simple commands to
install and run a VM Guest. This chapter focuses on a more detailed
explanation of qemu-system-ARCH usage, and shows solutions
for more specific tasks. For a complete list of
qemu-system-ARCH's options, see its manual page
(man 1 qemu).
qemu-system-ARCH Invocation #
The qemu-system-ARCH command uses the following syntax:
qemu-system-ARCH OPTIONS DISK_IMG
DISK_IMG: Path to the disk image holding the guest system you want to virtualize.
qemu-system-ARCH Options #
This section introduces general qemu-system-ARCH options
and options related to the basic emulated hardware, such as the virtual
machine's processor, memory, model type, or time processing methods.
-name NAME_OF_GUEST
Specifies the name of the running guest system. The name is displayed in the window caption and used for the VNC server.
-boot OPTIONS
Specifies the order in which the defined drives will be booted. Drives
are represented by letters, where a and
b stand for the floppy drives 1 and 2,
c stands for the first hard disk, d
stands for the first CD-ROM drive, and n to
p stand for Ether-boot network adapters.
For example, qemu-system-ARCH [...] -boot order=ndc
first tries to boot from network, then from the first CD-ROM drive, and
finally from the first hard disk.
-pidfile FILENAME
Stores QEMU's process identification number (PID) in a file. This is useful if you run QEMU from a script.
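As an illustration of why a PID file helps in scripts, the following runnable sketch uses sleep as a stand-in for the QEMU process (when started with -pidfile, QEMU writes the file itself):

```shell
# 'sleep' stands in for qemu-system-ARCH so the sketch runs anywhere;
# with -pidfile, QEMU would write the PID file on its own.
pidfile=$(mktemp)
sleep 30 &
echo $! > "$pidfile"
pid=$(cat "$pidfile")
kill "$pid"                      # later, a script can stop the process by PID
wait "$pid" 2>/dev/null || true
rm -f "$pidfile"
```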
-nodefaults
By default QEMU creates basic virtual devices even if you do not specify them on the command line. This option turns this feature off, and you must specify every single device manually, including graphical and network cards, parallel or serial ports, or virtual consoles. Even QEMU monitor is not attached by default.
-daemonize
“Daemonizes” the QEMU process after it is started. QEMU will detach from the standard input and standard output after it is ready to receive connections on any of its devices.
SeaBIOS is the default BIOS used. It can boot from USB devices and from any drive (CD-ROM, floppy, or hard disk). It has USB mouse and keyboard support and supports multiple VGA cards. For more information about SeaBIOS, refer to the SeaBIOS Website.
You can specify the type of the emulated machine. Run
qemu-system-ARCH -M help to view a list of supported
machine types.
The machine type isapc: ISA-only-PC is unsupported.
To specify the type of the processor (CPU) model, run
qemu-system-ARCH -cpu MODEL.
Use qemu-system-ARCH -cpu help to view a list of
supported CPU models.
For information on CPU flags, see the CPUID article on Wikipedia.
The following is a list of the most commonly used options for launching QEMU from the command line. For all available options, refer to the qemu-doc manual page.
-m MEGABYTES
Specifies how many megabytes are used for the virtual RAM size.
-balloon virtio
Specifies a paravirtualized device to dynamically change the amount of
virtual RAM memory assigned to VM Guest. The top limit is the amount
of memory specified with -m.
-smp NUMBER_OF_CPUS
Specifies how many CPUs will be emulated. QEMU supports up to 255 CPUs on the PC platform (up to 64 when KVM acceleration is used). This option also takes other CPU-related parameters, such as the number of sockets, the number of cores per socket, or the number of threads per core.
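With topology sub-parameters (the values here are illustrative), the vCPU total must equal sockets * cores per socket * threads per core:

```shell
# Illustrative topology: total vCPUs = sockets * cores * threads
sockets=2; cores=2; threads=2
total=$((sockets * cores * threads))    # 8 vCPUs in total
echo "-smp ${total},sockets=${sockets},cores=${cores},threads=${threads}"
```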
The following is an example of a working
qemu-system-ARCH command line:
tux > qemu-system-x86_64 -name "SLES 12 SP2" -M pc-i440fx-2.7 -m 512 \
-machine accel=kvm -cpu kvm64 -smp 2 -drive file=/images/sles.raw
-no-acpi
Disables ACPI support.
-S
QEMU starts with CPU stopped. To start CPU, enter
c in QEMU monitor. For more information, see
Chapter 29, Virtual Machine Administration Using QEMU Monitor.
-readconfig CFG_FILE
Instead of entering the devices configuration options on the command
line each time you want to run VM Guest,
qemu-system-ARCH can read it from a file that was
either previously saved with -writeconfig or edited
manually.
-writeconfig CFG_FILE
Dumps the current virtual machine's devices configuration to a text
file. It can be consequently re-used with the
-readconfig option.
tux > qemu-system-x86_64 -name "SLES 12 SP2" -machine accel=kvm -M pc-i440fx-2.7 -m 512 -cpu kvm64 \
-smp 2 /images/sles.raw -writeconfig /images/sles.cfg
(exited)
tux > cat /images/sles.cfg
# qemu config file
[drive]
  index = "0"
  media = "disk"
  file = "/images/sles_base.raw"
This way you can effectively manage the configuration of your virtual machines' devices in a well-arranged way.
-rtc OPTIONS
Specifies the way the RTC is handled inside a VM Guest. By default, the clock of the guest is derived from that of the host system. Therefore, it is recommended that the host system clock is synchronized with an accurate external clock (for example, via NTP service).
If you need to isolate the VM Guest clock from the host one, specify
clock=vm instead of the default
clock=host.
You can also specify the initial time of the VM Guest's clock with the
base option:
tux > qemu-system-x86_64 [...] -rtc clock=vm,base=2010-12-03T01:02:00
Instead of a time stamp, you can specify utc or
localtime. The former instructs VM Guest to start at
the current UTC value (Coordinated Universal Time, see
http://en.wikipedia.org/wiki/UTC), while the latter
applies the local time setting.
QEMU virtual machines emulate all devices needed to run a VM Guest. QEMU supports, for example, several types of network cards, block devices (hard and removable drives), USB devices, character devices (serial and parallel ports), or multimedia devices (graphic and sound cards). This section introduces options to configure various types of supported devices.
If your device, such as -drive, needs a special driver
and driver properties to be set, specify them with the
-device option, and identify it with the
drive= sub-option. For example:
tux > sudo qemu-system-x86_64 [...] -drive if=none,id=drive0,format=raw \
-device virtio-blk-pci,drive=drive0,scsi=off ...
To get help on available drivers and their properties, use -device
? and -device
DRIVER,?.
Block devices are vital for virtual machines. In general, these are fixed or removable storage media usually called drives. One of the connected hard disks typically holds the guest operating system to be virtualized.
Virtual Machine drives are defined with
-drive. This option has many sub-options, some of which
are described in this section. For the complete list, see the manual page
(man 1 qemu).
-drive Option #
file=image_fname
Specifies the path to the disk image that will be used with this drive. If not specified, an empty (removable) drive is assumed.
if=drive_interface
Specifies the type of interface to which the drive is connected.
Currently only floppy, scsi,
ide, or virtio are supported by
SUSE. virtio defines a paravirtualized disk driver.
Default is ide.
index=index_of_connector
Specifies the index number of a connector on the disk interface (see the
if option) where the drive is connected. If not
specified, the index is automatically incremented.
media=type
Specifies the type of media. Can be disk for hard
disks, or cdrom for removable CD-ROM drives.
format=img_fmt
Specifies the format of the connected disk image. If not specified, the
format is autodetected. Currently, SUSE supports
qcow2, qed and
raw formats.
cache=method
Specifies the caching method for the drive. Possible values are
unsafe, writethrough,
writeback, directsync, or
none. To improve performance when using the
qcow2 image format, select
writeback.
none disables the host page cache and, therefore, is
the safest option. Default for image files is
writeback. For more information, see
Chapter 14, Disk Cache Modes.
To simplify defining block devices, QEMU understands several shortcuts
which you may find handy when entering the
qemu-system-ARCH command line.
You can use
tux > sudo qemu-system-x86_64 -cdrom /images/cdrom.iso
instead of
tux > sudo qemu-system-x86_64 -drive file=/images/cdrom.iso,index=2,media=cdrom
and
tux > sudo qemu-system-x86_64 -hda /images/image1.raw -hdb /images/image2.raw -hdc \
/images/image3.raw -hdd /images/image4.raw
instead of
tux > sudo qemu-system-x86_64 -drive file=/images/image1.raw,index=0,media=disk \
-drive file=/images/image2.raw,index=1,media=disk \
-drive file=/images/image3.raw,index=2,media=disk \
-drive file=/images/image4.raw,index=3,media=disk
As an alternative to using disk images (see
Section 27.2, “Managing Disk Images with qemu-img”) you can also use existing
VM Host Server disks, connect them as drives, and access them from VM Guest.
Use the host disk device directly instead of disk image file names.
To access the host CD-ROM drive, use
tux > sudo qemu-system-x86_64 [...] -drive file=/dev/cdrom,media=cdrom
To access the host hard disk, use
tux > sudo qemu-system-x86_64 [...] -drive file=/dev/hdb,media=disk
A host drive used by a VM Guest must not be accessed concurrently by the VM Host Server or another VM Guest.
A sparse image file is a type of disk image file that grows in size as the user adds data to it, taking up only as much disk space as is actually stored in it. For example, if you copy 1 GB of data inside the sparse disk image, its size grows by 1 GB. If you then delete, for example, 500 MB of that data, the image size does not, by default, decrease as expected.
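Sparse-file behavior can be demonstrated with a plain file, no guest image required; the apparent size and the allocated size diverge exactly as described above:

```shell
# Create a 1 GiB sparse file and compare apparent size with allocated size.
img=$(mktemp)
truncate -s 1G "$img"                              # apparent size: 1 GiB, no data written
apparent=$(du -k --apparent-size "$img" | cut -f1)
allocated=$(du -k "$img" | cut -f1)
echo "apparent=${apparent}K allocated=${allocated}K"
rm -f "$img"
```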
That is why the discard=on option is introduced on the
KVM command line. It tells the hypervisor to automatically free the
“holes” after deleting data from the sparse guest image. Note
that this option is valid only for the if=scsi drive
interface:
tux > sudo qemu-system-x86_64 [...] -drive file=/path/to/file.img,if=scsi,discard=on
if=scsi is not supported. This interface does not map to
virtio-scsi, but rather to the lsi SCSI
adapter.
IOThreads are dedicated event loop threads for virtio devices to perform I/O requests in order to improve scalability, especially on an SMP VM Host Server with SMP VM Guests using many disk devices. Instead of using QEMU's main event loop for I/O processing, IOThreads allow spreading I/O work across multiple CPUs and can improve latency when properly configured.
IOThreads are enabled by defining IOThread objects. virtio devices can
then use the objects for their I/O event loops. Many virtio devices can
use a single IOThread object, or virtio devices and IOThread objects
can be configured in a 1:1 mapping. The following example creates a
single IOThread with ID iothread0 which is then used
as the event loop for two virtio-blk devices.
tux > qemu-system-x86_64 [...] -object iothread,id=iothread0\
-drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread0 [...]
The following qemu command line example illustrates a 1:1 virtio device to IOThread mapping:
tux > qemu-system-x86_64 [...] -object iothread,id=iothread0\
-object iothread,id=iothread1 -drive if=none,id=drive0,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive0,scsi=off,\
iothread=iothread0 -drive if=none,id=drive1,cache=none,aio=native,\
format=raw,file=filename -device virtio-blk-pci,drive=drive1,scsi=off,\
iothread=iothread1 [...]
For better performance of I/O-intensive applications, a new I/O path was introduced for the virtio-blk interface in kernel version 3.7. This bio-based block device driver skips the I/O scheduler, and thus shortens the I/O path in the guest and has lower latency. It is especially useful for high-speed storage devices, such as SSD disks.
The driver is disabled by default. To use it, do the following:
Append virtio_blk.use_bio=1 to the kernel command
line on the guest. You can do so via the boot loader configuration
(for example, in the YaST boot loader module).
You can do it also by editing /etc/default/grub,
searching for the line that contains
GRUB_CMDLINE_LINUX_DEFAULT=, and adding the kernel
parameter at the end. Then run grub2-mkconfig
>/boot/grub2/grub.cfg to update the grub2 boot menu.
Reboot the guest with the new kernel command line active.
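The /etc/default/grub edit described in the steps above can be sketched as follows; the sketch operates on a temporary copy with an assumed existing GRUB_CMDLINE_LINUX_DEFAULT line, so it is safe to run anywhere:

```shell
# Append the kernel parameter inside the quotes of GRUB_CMDLINE_LINUX_DEFAULT.
cfg=$(mktemp)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$cfg"   # assumed existing line
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"$/\1 virtio_blk.use_bio=1"/' "$cfg"
result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```

On a real guest you would edit /etc/default/grub itself and then regenerate grub.cfg with grub2-mkconfig, as described above.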
The bio-based virtio-blk driver does not help on slow devices such as spinning hard disks. The reason is that the benefit of scheduling outweighs what the shortened bio path offers. Do not use the bio-based driver on slow devices.
QEMU now integrates with libiscsi. This allows
QEMU to access iSCSI resources directly and use them as virtual
machine block devices.
This feature does not require any host iSCSI initiator
configuration, as is needed for a libvirt iSCSI target based storage
pool setup. Instead it directly connects guest storage interfaces
to an iSCSI target LUN by means of the user space library libiscsi.
iSCSI-based disk devices can also be
specified in the libvirt XML configuration.
This feature is only available using the RAW image format, as the iSCSI protocol has some technical limitations.
The following is the QEMU command line interface for iSCSI connectivity.
The use of libiscsi-based storage provisioning is not yet exposed by the virt-manager interface; instead, it is configured by directly editing the guest XML. This new way of accessing iSCSI-based storage is done at the command line.
tux > sudo qemu-system-x86_64 -machine accel=kvm \
-drive file=iscsi://192.168.100.1:3260/iqn.2016-08.com.example:314605ab-a88e-49af-b4eb-664808a3443b/0,\
format=raw,if=none,id=mydrive,cache=none \
-device ide-hd,bus=ide.0,unit=0,drive=mydrive ...
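The iscsi:// URL in the command above decomposes into a portal (host:port), the target IQN, and a LUN number; assembled in the shell:

```shell
# Parts of the iscsi:// URL used in the command above:
portal="192.168.100.1:3260"
iqn="iqn.2016-08.com.example:314605ab-a88e-49af-b4eb-664808a3443b"
lun=0
url="iscsi://${portal}/${iqn}/${lun}"
echo "$url"
```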
Here is an example snippet of guest domain xml which uses the protocol based iSCSI:
<devices>
...
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'>
<host name='example.com' port='3260'/>
</source>
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
<target dev='vda' bus='virtio'/>
</disk>
</devices>
Contrast that with an example which uses the host-based iSCSI initiator that virt-manager sets up:
<devices>
...
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source dev='/dev/disk/by-path/scsi-0:0:0:0'/>
<target dev='hda' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
</devices>
RADOS Block Devices (RBD) store data in a Ceph cluster. They allow snapshotting, replication, and data consistency. You can use an RBD from your KVM-managed VM Guests similarly to how you use other block devices.
Refer to SUSE Enterprise Storage documentation for more details.
This section describes QEMU options affecting the type of the emulated video card and the way VM Guest graphical output is displayed.
QEMU uses -vga to define a video card used to display
VM Guest graphical output. The -vga option understands
the following values:
none
Disables video cards on VM Guest (no video card is emulated). You can still access the running VM Guest via the serial console.
std
Emulates a standard VESA 2.0 VBE video card. Use it if you intend to use high display resolution on VM Guest.
cirrus
Emulates a Cirrus Logic GD5446 video card. A good choice if you insist on high compatibility of the emulated video hardware. Most operating systems (even Windows 95) recognize this type of card.
For best video performance with the cirrus type,
use 16-bit color depth both on VM Guest and VM Host Server.
The following options affect the way VM Guest graphical output is displayed.
-display gtk
Display video output in a GTK window. This interface provides UI elements to configure and control the VM during runtime.
-display sdl
Display video output via SDL, usually in a separate graphics window. For more information, see the SDL documentation.
-spice option[,option[,...]]
Enables the spice remote desktop protocol.
-display vnc
Refer to Section 28.5, “Viewing a VM Guest with VNC” for more information.
-nographic
Disables QEMU's graphical output. The emulated serial port is redirected to the console.
After starting the virtual machine with -nographic,
press
Ctrl–A
H in the virtual console to view the list of other
useful shortcuts, for example, to toggle between the console and the
QEMU monitor.
tux > qemu-system-x86_64 -hda /images/sles_base.raw -nographic
C-a h print this help
C-a x exit emulator
C-a s save disk data back to file (if -snapshot)
C-a t toggle console timestamps
C-a b send break (magic sysrq)
C-a c switch between console and monitor
C-a C-a sends C-a
(pressed C-a c)
QEMU 2.3.1 monitor - type 'help' for more information
(qemu)
-no-frame
Disables decorations for the QEMU window. Convenient for dedicated desktop work space.
-full-screen
Starts QEMU graphical output in full screen mode.
-no-quit
Disables the close button of the QEMU window and prevents it from being closed by force.
-alt-grab, -ctrl-grab
By default, the QEMU window releases the “captured” mouse
after pressing
Ctrl–Alt. You can change the key combination to either
Ctrl–Alt–Shift
(-alt-grab), or the right
Ctrl key (-ctrl-grab).
There are two ways to create USB devices usable by the VM Guest in KVM:
you can either emulate new USB devices inside a VM Guest, or assign an
existing host USB device to a VM Guest. To use USB devices in QEMU you
first need to enable the generic USB driver with the -usb
option. Then you can specify individual devices with the
-usbdevice option.
SUSE currently supports the following types of USB devices:
disk, host,
serial, braille,
net, mouse, and
tablet.
-usbdevice option #
disk
Emulates a mass storage device based on a file. The optional
format option is used rather than detecting the
format.
tux > qemu-system-x86_64 [...] -usbdevice disk:format=raw:/virt/usb_disk.raw
host
Pass through the host device (identified by bus.addr).
serial
Serial converter to a host character device.
braille
Emulates a braille device using BrlAPI to display the braille output.
net
Emulates a network adapter that supports CDC Ethernet and RNDIS protocols.
mouse
Emulates a virtual USB mouse. This option overrides the default PS/2
mouse emulation. The following example shows the hardware status of a
mouse on VM Guest started with qemu-system-ARCH [...]
-usbdevice mouse:
tux > sudo hwinfo --mouse
20: USB 00.0: 10503 USB Mouse
[Created at usb.122]
UDI: /org/freedesktop/Hal/devices/usb_device_627_1_1_if0
[...]
Hardware Class: mouse
Model: "Adomax QEMU USB Mouse"
Hotplug: USB
Vendor: usb 0x0627 "Adomax Technology Co., Ltd"
Device: usb 0x0001 "QEMU USB Mouse"
[...]
tablet
Emulates a pointer device that uses absolute coordinates (such as touchscreen). This option overrides the default PS/2 mouse emulation. The tablet device is useful if you are viewing VM Guest via the VNC protocol. See Section 28.5, “Viewing a VM Guest with VNC” for more information.
Use -chardev to create a new character device. The
option uses the following general syntax:
qemu-system-x86_64 [...] -chardev BACKEND_TYPE,id=ID_STRING
where BACKEND_TYPE can be one of
null, socket, udp,
msmouse, vc, file,
pipe, console,
serial, pty,
stdio, braille,
tty, or parport. All character
devices must have a unique identification string up to 127 characters long.
It is used to identify the device in other related directives. For the
complete description of all back-end's sub-options, see the manual page
(man 1 qemu). A brief description of the available
back-ends follows:
null
Creates an empty device that outputs no data and drops any data it receives.
stdio
Connects to QEMU's process standard input and standard output.
socket
Creates a two-way stream socket. If PATH is specified, a Unix socket is created:
tux > sudo qemu-system-x86_64 [...] -chardev \
socket,id=unix_socket1,path=/tmp/unix_socket1,server
The SERVER suboption specifies that the socket is a listening socket.
If PORT is specified, a TCP socket is created:
tux > sudo qemu-system-x86_64 [...] -chardev \
socket,id=tcp_socket1,host=localhost,port=7777,server,nowait
The command creates a local listening (server) TCP
socket on port 7777. QEMU will not block waiting for a client to
connect to the listening port (nowait).
udp
Sends all network traffic from VM Guest to a remote host over the UDP protocol.
tux > sudo qemu-system-x86_64 [...] \
-chardev udp,id=udp_fwd,host=mercury.example.com,port=7777
The command binds port 7777 on the remote host mercury.example.com and sends VM Guest network traffic there.
vc
Creates a new QEMU text console. You can optionally specify the dimensions of the virtual console:
tux > sudo qemu-system-x86_64 [...] -chardev vc,id=vc1,width=640,height=480 \
-mon chardev=vc1
The command creates a new virtual console called vc1
of the specified size, and connects the QEMU monitor to it.
file
Logs all traffic from VM Guest to a file on VM Host Server. The
path is required and will be created if it does not
exist.
tux > sudo qemu-system-x86_64 [...] \
-chardev file,id=qemu_log1,path=/var/log/qemu/guest1.log
By default QEMU creates a set of character devices for serial and parallel ports, and a special console for QEMU monitor. However, you can create your own character devices and use them for the mentioned purposes. The following options will help you:
-serial CHAR_DEV
Redirects the VM Guest's virtual serial port to a character device
CHAR_DEV on VM Host Server. By default, it is a
virtual console (vc) in graphical mode, and
stdio in non-graphical mode. The
-serial option understands many sub-options. See the manual
page man 1 qemu for a complete list of them.
You can emulate up to 4 serial ports. Use -serial
none to disable all serial ports.
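For example, the guest's first serial console can be captured to a file on VM Host Server with the file back-end (a sketch; the log path is arbitrary):

```
tux > sudo qemu-system-x86_64 [...] -serial file:/var/log/qemu/guest1-serial.log
```

This is convenient for collecting boot messages of a headless guest.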
-parallel DEVICE
Redirects the VM Guest's parallel port to a
DEVICE. This option supports the same devices
as -serial.
With openSUSE
Leap as a VM Host Server, you can directly use the hardware parallel
port devices /dev/parportN where
N is the number of the port.
You can emulate up to 3 parallel ports. Use -parallel
none to disable all parallel ports.
-monitor CHAR_DEV
Redirects the QEMU monitor to a character device
CHAR_DEV on VM Host Server. This option supports
the same devices as -serial. By default, it is a
virtual console (vc) in a graphical mode, and
stdio in non-graphical mode.
For a complete list of available character device back-ends, see the man
page (man 1 qemu).
Use the -netdev option in combination with
-device to define a specific type of networking and a
network interface card for your VM Guest. The syntax for the
-netdev option is
-netdev type[,prop[=value][,...]]
Currently, SUSE supports the following network types:
user, bridge, and
tap. For a complete list of -netdev
sub-options, see the manual page (man 1 qemu).
-netdev Sub-options
bridge
Uses a specified network helper to configure the TAP interface and attach it to a specified bridge. For more information, see Section 28.4.3, “Bridged Networking”.
user
Specifies user-mode networking. For more information, see Section 28.4.2, “User-Mode Networking”.
tap
Specifies bridged or routed networking. For more information, see Section 28.4.3, “Bridged Networking”.
Use -netdev together with the related
-device option to add a new emulated network card:
tux > sudo qemu-system-x86_64 [...] \
-netdev tap,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=1,\
macaddr=00:16:35:AF:94:4B,name=ncard1
1. Specifies the network device type.
2. Specifies the model of the network card. For the models currently supported by SUSE, see the manual page (man 1 qemu).
3. Connects the network interface to VLAN number 1. You can specify your own number; it is mainly useful for identification purposes. If you omit this suboption, QEMU uses the default 0.
4. Specifies the Media Access Control (MAC) address for the network card. It is a unique identifier and you are advised to always specify it. If not, QEMU supplies its own default MAC address, creating a possible MAC address conflict within the related VLAN.
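One way to avoid such conflicts is to generate a random MAC address on VM Host Server before starting the guest. This sketch uses the 52:54:00 prefix conventionally associated with QEMU/KVM guests (the prefix choice is a convention, not a requirement):

```shell
# Generate a random MAC address with the QEMU/KVM vendor prefix 52:54:00,
# so it cannot collide with the address of a physical network card.
mac=$(printf '52:54:00:%02x:%02x:%02x' \
  $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256)))
echo "$mac"
```

The result can then be passed to QEMU as macaddr=$mac in the -device option.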
The -netdev user option instructs QEMU to use
user-mode networking. This is the default if no networking mode is
selected. Therefore, these command lines are equivalent:
tux > sudo qemu-system-x86_64 -hda /images/sles_base.raw
tux > sudo qemu-system-x86_64 -hda /images/sles_base.raw -netdev user,id=hostnet0
This mode is useful if you want to allow the VM Guest to access external network resources, such as the Internet. By default, no incoming traffic is permitted, so the VM Guest is not visible to other machines on the network. No administrator privileges are required in this networking mode. User-mode is also useful for performing a network boot on your VM Guest from a local directory on VM Host Server.
The VM Guest allocates an IP address from a virtual DHCP server. VM Host Server
(the DHCP server) is reachable at 10.0.2.2, while the IP address range for
allocation starts from 10.0.2.15. You can use ssh to
connect to VM Host Server at 10.0.2.2, and scp to copy files
back and forth.
This section shows several examples on how to set up user-mode networking with QEMU.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,vlan=1,name=user_net1,restrict=yes
1. Specifies user-mode networking.
2. Connects to VLAN number 1. If omitted, defaults to 0.
3. Specifies a human-readable name of the network stack. Useful when identifying it in the QEMU monitor.
4. Isolates VM Guest. It then cannot communicate with VM Host Server and no network packets will be routed to the external network.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,net=10.2.0.0/8,host=10.2.0.6,\
dhcpstart=10.2.0.20,hostname=tux_kvm_guest
1. Specifies the IP address of the network that VM Guest sees, and optionally the netmask. Default is 10.0.2.0/8.
2. Specifies the VM Host Server IP address that VM Guest sees. Default is 10.0.2.2.
3. Specifies the first of the 16 IP addresses that the built-in DHCP server can assign to VM Guest. Default is 10.0.2.15.
4. Specifies the host name that the built-in DHCP server assigns to VM Guest.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,tftp=/images/tftp_dir,\
bootfile=/images/boot/pxelinux.0
1. Activates a built-in TFTP (a file transfer protocol with the functionality of a very basic FTP) server. The files in the specified directory will be visible to a VM Guest as the root of a TFTP server.
2. Broadcasts the specified file as a BOOTP (a network protocol that offers an IP address and a network location of a boot image, often used in diskless workstations) file. When used together with tftp, VM Guest can boot from the network from the local directory on VM Host Server.
tux > sudo qemu-system-x86_64 [...] \
-netdev user,id=hostnet0 \
-device virtio-net-pci,netdev=hostnet0,hostfwd=tcp::2222-:22
Forwards incoming TCP connections on port 2222 on the host to
port 22 (SSH) on VM Guest. If
sshd is running on VM Guest,
enter
tux > ssh qemu_host -p 2222
where qemu_host is the host name or IP address of the
host system, to get an SSH prompt
from VM Guest.
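Because the forwarding rule is an ordinary suboption string, it can be assembled in a shell script before QEMU is launched. A minimal sketch (the variable names are illustrative only):

```shell
# Compose a user-mode -netdev string that forwards host port 2222
# to guest port 22 (SSH). More hostfwd= rules can be appended the same way.
hostfwd="hostfwd=tcp::2222-:22"
netdev="user,id=hostnet0,${hostfwd}"
echo "-netdev ${netdev} -device virtio-net-pci,netdev=hostnet0"
```

The echoed string matches the -netdev/-device pair used in the example above.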
With the -netdev tap option, QEMU creates a network
bridge by connecting the host TAP network device to a specified VLAN of
VM Guest. Its network interface is then visible to the rest of the
network. This method is not enabled by default and must be explicitly
specified.
First, create a network bridge and add a VM Host Server physical network
interface (usually eth0) to it:
Start and select › .
Click and select from the drop-down box in the window. Click .
Choose whether you need a dynamically or statically assigned IP address, and fill the related network settings if applicable.
In the pane, select the Ethernet device to add to the bridge.
Click . When asked about adapting an already configured device, click .
Click to apply the changes. Check if the bridge is created:
tux > bridge link
2: eth0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 \
state forwarding priority 32 cost 100
Use the following example script to connect VM Guest to the newly created
bridge interface br0. Several commands in the script
are run via the sudo mechanism because they require
root privileges.
To manage a network bridge, you need to have the tunctl package installed.
#!/bin/bash
bridge=br0
tap=$(sudo tunctl -u $(whoami) -b)
sudo ip link set $tap up
sleep 1s
sudo ip link add name $bridge type bridge
sudo ip link set $bridge up
sudo ip link set $tap master $bridge
qemu-system-x86_64 -machine accel=kvm -m 512 -hda /images/sles_base.raw \
 -netdev tap,id=hostnet0 \
 -device virtio-net-pci,netdev=hostnet0,vlan=0,macaddr=00:16:35:AF:94:4B,\
ifname=$tap,script=no,downscript=no
sudo ip link set $tap nomaster
sudo ip link set $tap down
sudo tunctl -d $tap
1. Name of the bridge device.
2. Prepare a new TAP device and assign it to the user who runs the script. TAP devices are virtual network devices often used for virtualization and emulation setups.
3. Bring up the newly created TAP network interface.
4. Make a 1-second pause to make sure the new TAP network interface is really up.
5. Add the new TAP device to the network bridge.
6. The ifname= suboption specifies the name of the TAP network interface used for bridging.
7. With script=no and downscript=no, QEMU does not run its default network configuration scripts /etc/qemu-ifup and /etc/qemu-ifdown.
8. Deletes the TAP interface from the network bridge.
9. Sets the state of the TAP device to down.
10. Deconfigures the TAP device.
Another way to connect VM Guest to a network through a network bridge is
by means of the qemu-bridge-helper helper program. It
configures the TAP interface for you, and attaches it to the specified
bridge. The default helper executable is
/usr/lib/qemu-bridge-helper. The helper executable is
setuid root and is only executable by members of the virtualization
group (kvm). Therefore, the
qemu-system-ARCH command itself does not need to be run
with root privileges.
The helper is automatically called when you specify a network bridge:
qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0 \
 -device virtio-net-pci,netdev=hostnet0
You can specify your own custom helper script that will take care of the
TAP device (de)configuration, with the
helper=/path/to/your/helper option:
qemu-system-x86_64 [...] \
 -netdev bridge,id=hostnet0,vlan=0,br=br0,helper=/path/to/bridge-helper \
 -device virtio-net-pci,netdev=hostnet0
To define access privileges to qemu-bridge-helper,
inspect the /etc/qemu/bridge.conf file. For example,
the following directive
allow br0
allows the qemu-system-ARCH command to connect its
VM Guest to the network bridge br0.
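The file accepts one directive per line, so several bridges can be permitted side by side. A sketch of a possible /etc/qemu/bridge.conf (virbr0 here is only an illustrative second bridge name):

```
allow br0
allow virbr0
```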
By default QEMU uses a GTK (a cross-platform toolkit library) window to
display the graphical output of a VM Guest.
With the -vnc option specified, you can make QEMU
listen on a specified VNC display and redirect its graphical output to the
VNC session.
When working with QEMU's virtual machine via a VNC session, it is useful to
use the -usbdevice tablet option.
Moreover, if you need to use another keyboard layout than the default
en-us, specify it with the -k option.
The first suboption of -vnc must be a
display value. The -vnc option
understands the following display specifications:
host:display
Only connections from host on the display number
display will be accepted. The TCP port on which the
VNC session is then running is normally 5900 +
display. If you do not specify
host, connections will be accepted from any host.
unix:path
The VNC server listens for connections on Unix domain sockets. The
path option specifies the location of the related Unix
socket.
none
The VNC server functionality is initialized, but the server itself is not started. You can start the VNC server later with the QEMU monitor. For more information, see Chapter 29, Virtual Machine Administration Using QEMU Monitor.
Following the display value there may be one or more option flags separated by commas. Valid options are:
reverse
Connect to a listening VNC client via a reverse connection.
websocket
Opens an additional TCP listening port dedicated to VNC Websocket connections. By definition the Websocket port is 5700+display.
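The port arithmetic for both listening sockets can be verified with plain shell arithmetic (display number 5 is just an example):

```shell
# For -vnc :DISPLAY, the plain VNC port is 5900 + DISPLAY
# and the websocket port is 5700 + DISPLAY.
display=5
echo "VNC port:       $((5900 + display))"
echo "websocket port: $((5700 + display))"
```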
password
Require that password-based authentication is used for client connections.
tls
Require that clients use TLS when communicating with the VNC server.
x509=/path/to/certificate/dir
Valid if TLS is specified. Require that x509 credentials are used for negotiating the TLS session.
x509verify=/path/to/certificate/dir
Valid if TLS is specified. Requires that x509 credentials are used for negotiating the TLS session, and that the client presents a certificate which is validated against the certificate authority.
sasl
Require that the client uses SASL to authenticate with the VNC server.
acl
Turn on access control lists for checking of the x509 client certificate and SASL party.
lossy
Enable lossy compression methods (gradient, JPEG, ...).
non-adaptive
Disable adaptive encodings. Adaptive encodings are enabled by default.
share=[allow-exclusive|force-shared|ignore]
Set display sharing policy.
For more details about the display options, see the qemu-doc man page.
An example VNC usage:
tux > qemu-system-x86_64 [...] -vnc :5
# (on the client:)
wilber > vncviewer venus:5 &
The default VNC server setup does not use any form of authentication. In the previous example, any user can connect and view the QEMU VNC session from any host on the network.
There are several levels of security that you can apply to your VNC client/server connection. You can either protect your connection with a password, use x509 certificates, use SASL authentication, or even combine some authentication methods in one QEMU command.
See Section A.1, “Generating x509 Client/Server Certificates” for more information about the
x509 certificates generation. For more information about configuring x509
certificates on a VM Host Server and the client, see
Section 10.3.2, “Remote TLS/SSL Connection with x509 Certificate (qemu+tls or xen+tls)” and
Section 10.3.2.3, “Configuring the Client and Testing the Setup”.
The Remmina VNC viewer supports advanced authentication mechanisms.
Therefore, it will be used to view the graphical output of VM Guest in the
following examples. For this example, let us assume that the server x509
certificates ca-cert.pem,
server-cert.pem, and
server-key.pem are located in the
/etc/pki/qemu directory on the host.
The client certificates can be placed in any custom directory, as Remmina
asks for their path on the connection start-up.
qemu-system-x86_64 [...] -vnc :5,password -monitor stdio
Starts the VM Guest graphical output on VNC display number 5 (usually
port 5905). The password suboption initializes a simple
password-based authentication method. There is no password set by default
and you need to set one with the change vnc password
command in QEMU monitor:
QEMU 2.3.1 monitor - type 'help' for more information
(qemu) change vnc password
Password: ****
You need the -monitor stdio option here, because you
would not be able to manage the QEMU monitor without redirecting its
input/output.
The QEMU VNC server can use TLS encryption for the session and x509 certificates for authentication. The server asks the client for a certificate and validates it against the CA certificate. Use this authentication type if your company provides an internal certificate authority.
qemu-system-x86_64 [...] -vnc :5,tls,x509verify=/etc/pki/qemu
You can combine the password authentication with TLS encryption and x509 certificate authentication to create a two-layer authentication model for clients. Remember to set the password in the QEMU monitor after you run the following command:
qemu-system-x86_64 [...] -vnc :5,password,tls,x509verify=/etc/pki/qemu \
 -monitor stdio
Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It integrates several authentication mechanisms, like PAM, Kerberos, LDAP and more. SASL keeps its own user database, so the connecting user accounts do not need to exist on VM Host Server.
For security reasons, you are advised to combine SASL authentication with TLS encryption and x509 certificates:
qemu-system-x86_64 [...] -vnc :5,tls,x509,sasl -monitor stdio
When QEMU is running, a monitor console is provided for interacting with the user. Using the commands available in the monitor console, it is possible to inspect the running operating system, change removable media, take screenshots or audio grabs, and control other aspects of the virtual machine.
The following sections list selected useful QEMU monitor commands
and their purpose. To get the full list, enter help in
the QEMU monitor command line.
You can access the monitor console from QEMU window either by a keyboard shortcut—press Ctrl–Alt–2 (to return to QEMU, press Ctrl–Alt–1)—or alternatively by clicking in the QEMU GUI window, then . The most convenient way is to show the QEMU window tabs with › . Then you can easily switch between the guest screen, monitor screen, and the output of the serial and parallel console.
To get help while using the console, use help or
?. To get help for a specific command, use
help COMMAND.
To get information about the guest system, use
info. If used without any option, the list of possible
options is printed. Options determine which part of the system will be
analyzed:
info version
Shows the version of QEMU.
info commands
Lists available QMP commands.
info network
Shows the network state.
info chardev
Shows the character devices.
info block
Information about block devices, such as hard disks, floppy drives, or CD-ROMs.
info blockstats
Read and write statistics on block devices.
info registers
Shows the CPU registers.
info cpus
Shows information about available CPUs.
info history
Shows the command line history.
info irq
Shows the interrupt statistics.
info pic
Shows the i8259 (PIC) state.
info pci
Shows the PCI information.
info tlb
Shows virtual to physical memory mappings.
info mem
Shows the active virtual memory mappings.
info jit
Shows dynamic compiler information.
info kvm
Shows the KVM information.
info numa
Shows the NUMA information.
info usb
Shows the guest USB devices.
info usbhost
Shows the host USB devices.
info profile
Shows the profiling information.
info capture
Shows the capture (audio grab) information.
info snapshots
Shows the currently saved virtual machine snapshots.
info status
Shows the current virtual machine status.
info mice
Shows which guest mice are receiving events.
info vnc
Shows the VNC server status.
info name
Shows the current virtual machine name.
info uuid
Shows the current virtual machine UUID.
info usernet
Shows the user network stack connection states.
info migrate
Shows the migration status.
info balloon
Shows the balloon device information.
info qtree
Shows the device tree.
info qdm
Shows the qdev device model list.
info roms
Shows the ROMs.
info migrate_cache_size
Shows the current migration xbzrle (“Xor Based Zero Run Length Encoding”) cache size.
info migrate_capabilities
Shows the status of the various migration capabilities, such as xbzrle compression.
info mtree
Shows the VM Guest memory hierarchy.
info trace-events
Shows available trace-events and their status.
To change the VNC password, use the change vnc
password command and enter the new password:
(qemu) change vnc password
Password: ********
(qemu)
To add a new disk while the guest is running (hotplug), use the
drive_add and device_add commands.
First define a new drive to be added as a device to bus 0:
(qemu) drive_add 0 if=none,file=/tmp/test.img,format=raw,id=disk1
OK
You can confirm your new device by querying the block subsystem:
(qemu) info block
[...]
disk1: removable=1 locked=0 tray-open=0 file=/tmp/test.img ro=0 drv=raw \
encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
After the new drive is defined, it needs to be connected to a device so
that the guest can see it. The typical device would be a
virtio-blk-pci or scsi-disk. To get
the full list of available driver values, run:
(qemu) device_add ?
name "VGA", bus PCI
name "usb-storage", bus usb-bus
[...]
name "virtio-blk-pci", bus virtio-bus
Now add the device
(qemu) device_add virtio-blk-pci,drive=disk1,id=myvirtio1
and confirm with
(qemu) info pci
[...]
Bus 0, device 4, function 0:
SCSI controller: PCI device 1af4:1001
IRQ 0.
BAR0: I/O at 0xffffffffffffffff [0x003e].
BAR1: 32 bit memory at 0xffffffffffffffff [0x00000ffe].
id "myvirtio1"
Devices added with the device_add command can be
removed from the guest with device_del. Enter
help device_del on the QEMU monitor command line
for more information.
To release the device or file connected to the removable media device,
use the eject DEVICE
command. Use the optional -f to force ejection.
To change removable media (like CD-ROMs), use the
change DEVICE command. The
name of the removable media can be determined using the info
block command:
(qemu) info block
ide1-cd0: type=cdrom removable=1 locked=0 file=/dev/sr0 ro=1 drv=host_device
(qemu) change ide1-cd0 /path/to/image
It is possible to use the monitor console to emulate keyboard and mouse
input if necessary. For example, if your graphical user interface
intercepts some key combinations at low level (such as Ctrl–Alt–F1
in X Window), you can still enter them using the sendkey
KEYS command:
sendkey ctrl-alt-f1
To list the key names used in the KEYS option,
enter sendkey and press →| (Tab).
To control the mouse, the following commands can be used:
mouse_move DX DY [DZ]
Move the active mouse pointer to the specified coordinates DX, DY with the optional scroll axis DZ.
mouse_button VAL
Change the state of the mouse buttons (1=left, 2=middle, 4=right).
mouse_set INDEX
Set which mouse device receives events. Device index numbers can be
obtained with the info mice command.
If the virtual machine was started with the -balloon
virtio option (the paravirtualized balloon device is therefore
enabled), you can change the available memory dynamically. For
more information about enabling the balloon device, see
Section 27.1, “Basic Installation with qemu-system-ARCH”.
To get information about the balloon device in the monitor console and to
determine whether the device is enabled, use the info
balloon command:
(qemu) info balloon
If the balloon device is enabled, use the balloon
MEMORY_IN_MB command to set the requested
amount of memory:
(qemu) balloon 400
To save the content of the virtual machine memory to a disk or console output, use the following commands:
memsave ADDR SIZE FILENAME
Saves virtual memory dump starting at ADDR of size SIZE to file FILENAME.
pmemsave ADDR SIZE FILENAME
Saves physical memory dump starting at ADDR of size SIZE to file FILENAME.
x/FMT ADDR
Makes a virtual memory dump starting at address
ADDR and formatted according to the
FMT string. The
FMT string consists of three parameters
COUNT FORMAT SIZE:
The COUNT parameter is the number of items to be dumped.
The FORMAT can be x
(hex), d (signed decimal), u
(unsigned decimal), o (octal), c
(char) or i (assembly instruction).
The SIZE parameter can be
b (8 bits), h (16 bits),
w (32 bits) or g (64 bits). On
x86, h or w can be specified
with the i format to respectively select 16 or
32-bit code instruction size.
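As an illustration, the following monitor command would dump eight bytes in hexadecimal starting at a hypothetical guest virtual address:

```
(qemu) x/8xb 0x100000
```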
xp/FMT ADDR
Makes a physical memory dump starting at address
ADDR and formatted according to the
FMT string. The
FMT string consists of three parameters
COUNT FORMAT SIZE:
The COUNT parameter is the number of items to be dumped.
The FORMAT can be x
(hex), d (signed decimal), u
(unsigned decimal), o (octal), c
(char) or i (asm instruction).
The SIZE parameter can be
b (8 bits), h (16 bits),
w (32 bits) or g (64 bits). On
x86, h or w can be specified
with the i format to respectively select 16 or
32-bit code instruction size.
Managing snapshots in QEMU monitor is not officially supported by SUSE yet. The information found in this section may be helpful in specific cases.
Virtual Machine snapshots are snapshots of the complete
virtual machine including the state of CPU, RAM, and the content of all
writable disks. To use virtual machine snapshots, you must have at least
one non-removable and writable block device using the
qcow2 disk image format.
Snapshots are helpful when you need to save your virtual machine in a particular state. For example, after you have configured network services on a virtualized server and want to quickly start the virtual machine in the same state that was saved last. You can also create a snapshot after the virtual machine has been powered off to create a backup state before you try something experimental and possibly make VM Guest unstable. This section introduces the former case, while the latter is described in Section 27.2.3, “Managing Snapshots of Virtual Machines with qemu-img”.
The following commands are available for managing snapshots in QEMU monitor:
savevm NAME
Creates a new virtual machine snapshot under the tag NAME or replaces an existing snapshot.
loadvm NAME
Loads a virtual machine snapshot tagged NAME.
delvm NAME
Deletes the virtual machine snapshot tagged NAME.
info snapshots
Prints information about available snapshots.
(qemu) info snapshots
Snapshot list:
ID        TAG                 VM SIZE                DATE       VM CLOCK
1         booting                4.4M 2013-11-22 10:51:10   00:00:20.476
2         booted                 184M 2013-11-22 10:53:03   00:02:05.394
3         logged_in              273M 2013-11-22 11:00:25   00:04:34.843
4         ff_and_term_running    372M 2013-11-22 11:12:27   00:08:44.965
1. Unique identification number of the snapshot. Usually auto-incremented.
2. Unique description string of the snapshot. It is meant as a human-readable version of the ID.
3. The disk space occupied by the snapshot. Note that the more memory is consumed by running applications, the bigger the snapshot is.
4. Time and date the snapshot was created.
5. The current state of the virtual machine's clock.
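Putting these commands together, a typical monitor session might look as follows (the tag clean_install is an arbitrary example):

```
(qemu) savevm clean_install
(qemu) info snapshots
(qemu) loadvm clean_install
(qemu) delvm clean_install
```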
The following commands are available for suspending and resuming virtual machines:
stop
Suspends the execution of the virtual machine.
cont
Resumes the execution of the virtual machine.
system_reset
Resets the virtual machine. The effect is similar to the reset button on a physical machine. This may leave the file system in an unclean state.
system_powerdown
Sends an ACPI shutdown request to the machine. The effect is similar to the power button on a physical machine.
q or quit
Terminates QEMU immediately.
Live migration allows you to transfer a running virtual machine from one host system to another without any interruption in availability. It is possible to change hosts permanently or only during maintenance.
The requirements for live migration:
All requirements from Section 9.7.1, “Migration Requirements” are applicable.
Live migration is only possible between VM Host Servers with the same CPU features.
AHCI interface,
VirtFS feature, and the
-mem-path command line option are not compatible with
migration.
The guest on the source and destination hosts must be started in the same way.
The -snapshot qemu command line option should not be used
for migration (it is not supported).
The postcopy mode is not yet supported in
openSUSE Leap. It is released as a technology preview only. For
more information about postcopy, see http://wiki.qemu.org/Features/PostCopyLiveMigration.
More recommendations can be found at the following Web site: http://www.linux-kvm.org/page/Migration
The live migration process has the following steps:
The virtual machine instance is running on the source host.
The virtual machine is started on the destination host in the frozen
listening mode. The parameters used are the same as on the source host
plus the -incoming
tcp:IP:PORT
parameter, where IP specifies the IP address
and PORT specifies the port for listening to
the incoming migration. If 0 is set as IP address, the virtual machine
listens on all interfaces.
On the source host, switch to the monitor console and use the
migrate -d tcp:
DESTINATION_IP:PORT
command to initiate the migration.
To determine the state of the migration, use the info
migrate command in the monitor console on the source host.
To cancel the migration, use the migrate_cancel
command in the monitor console on the source host.
To set the maximum tolerable downtime for migration in seconds, use the
migrate_set_downtime
NUMBER_OF_SECONDS command.
To set the maximum speed for migration in bytes per second, use the
migrate_set_speed
BYTES_PER_SECOND command.
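Because migrate_set_speed expects bytes per second, a human-friendly rate needs converting first (a sketch; 32 MB/s is an arbitrary target):

```shell
# Convert a target migration bandwidth of 32 MB/s into the
# bytes-per-second value that migrate_set_speed expects.
mb_per_s=32
bytes_per_s=$((mb_per_s * 1024 * 1024))
echo "migrate_set_speed $bytes_per_s"
```

The echoed line is what you would then type into the monitor console.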
QMP is a JSON-based protocol that allows applications—such as
libvirt—to communicate with a running QEMU instance.
There are several ways you can access the QEMU monitor using QMP
commands.
The most flexible way to use QMP is by specifying the
-mon option. The following example creates a QMP
instance using standard input/output. Note that in the following
examples, -> marks lines with commands sent
from client to the running QEMU instance, while
<- marks lines with the output returned from
QEMU.
tux > sudo qemu-system-x86_64 [...] \
-chardev stdio,id=mon0 \
-mon chardev=mon0,mode=control,pretty=on
<- {
    "QMP": {
        "version": {
            "qemu": {
                "micro": 0,
                "minor": 0,
                "major": 2
            },
            "package": ""
        },
        "capabilities": [
        ]
    }
}
When a new QMP connection is established, QMP sends its greeting message
and enters capabilities negotiation mode. In this mode, only the
qmp_capabilities command works. To exit capabilities
negotiation mode and enter command mode, the
qmp_capabilities command must be issued first:
-> { "execute": "qmp_capabilities" }
<- {
"return": {
}
}
Note that "return": {} is QMP's success response.
QMP commands can have arguments. For example, to eject a CD-ROM drive, enter the following:
->{ "execute": "eject", "arguments": { "device": "ide1-cd0" } }
<- {
"timestamp": {
"seconds": 1410353381,
"microseconds": 763480
},
"event": "DEVICE_TRAY_MOVED",
"data": {
"device": "ide1-cd0",
"tray-open": true
}
}
{
"return": {
}
}
Instead of the standard input/output, you can connect the QMP interface to a network socket and communicate with it via a specified port:
tux > sudo qemu-system-x86_64 [...] \
-chardev socket,id=mon0,host=localhost,port=4444,server,nowait \
-mon chardev=mon0,mode=control,pretty=on
And then run telnet to connect to port 4444:
tux > telnet localhost 4444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
<- {
"QMP": {
"version": {
"qemu": {
"micro": 0,
"minor": 0,
"major": 2
},
"package": ""
},
"capabilities": [
]
}
}
You can create several monitor interfaces at the same time. The following example creates one HMP instance—a human monitor which understands 'normal' QEMU monitor commands—on the standard input/output, and one QMP instance on localhost port 4444:
tux > sudo qemu-system-x86_64 [...] \
-chardev stdio,id=mon0 -mon chardev=mon0,mode=readline \
-chardev socket,id=mon1,host=localhost,port=4444,server,nowait \
-mon chardev=mon1,mode=control,pretty=on
Invoke QEMU using the -qmp option, and create a
unix socket:
tux > sudo qemu-system-x86_64 [...] \
-qmp unix:/tmp/qmp-sock,server --monitor stdio
QEMU waiting for connection on: unix:/tmp/qmp-sock,server
To communicate with the QEMU instance via the
/tmp/qmp-sock socket, use nc (see
man 1 nc for more information) from another terminal
on the same host:
tux > sudo nc -U /tmp/qmp-sock
<- {"QMP": {"version": {"qemu": {"micro": 0, "minor": 0, "major": 2} [...]
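The handshake can also be scripted. This sketch composes the minimal command sequence (capabilities negotiation followed by a query), which could then be piped into nc -U /tmp/qmp-sock while such a guest is running; query-status is a standard QMP command:

```shell
# Compose a minimal QMP session: qmp_capabilities must be sent first,
# before any other command is accepted.
qmp_session() {
  printf '%s\n' \
    '{ "execute": "qmp_capabilities" }' \
    '{ "execute": "query-status" }'
}
qmp_session
```

For example: qmp_session | nc -U /tmp/qmp-sock (assuming the socket from the example above).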
libvirt's virsh Command
If you run your virtual machines under libvirt (see
Part II, “Managing Virtual Machines with libvirt”), you can communicate with its
running guests by running the virsh
qemu-monitor-command:
tux > sudo virsh qemu-monitor-command vm_guest1 \
--pretty '{"execute":"query-kvm"}'
<- {
    "return": {
        "enabled": true,
        "present": true
    },
    "id": "libvirt-8"
}
In the above example, we ran the simple command
query-kvm, which checks if the host is capable of
running KVM and if KVM is enabled.
To use the standard human-readable output format of QEMU
instead of the JSON format, use the --hmp
option:
tux > sudo virsh qemu-monitor-command vm_guest1 --hmp "query-kvm"
libvirt-lxc
Since openSUSE Leap, LXC is integrated into the libvirt library. This decision has several advantages over using LXC as a separate solution—such as a unified approach with other virtualization solutions or independence from the kernel used. This chapter describes steps needed to migrate an existing LXC en…
A container is a kind of “virtual machine” that can be started, stopped, frozen, or cloned (to name but a few tasks). To set up an LXC container, you first need to create a root file system containing the guest distribution:
There is currently no GUI to create a root file system. You will thus need
to open a terminal and use zypper as user root to
populate the new root file system. In the following steps, the new
root file system will be created in
/PATH/TO/ROOTFS.
Add the openSUSE Leap repository and the corresponding update repository
to the new root file system:
root # zypper --root /PATH/TO/ROOTFS ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ OSS
root # zypper --root /PATH/TO/ROOTFS ar http://download.opensuse.org/update/leap/42.3/oss/ Update-OSS
Refresh the repositories:
root # zypper --root /PATH/TO/ROOTFS ref
Install a minimal system:
root # zypper --root /PATH/TO/ROOTFS in -t pattern minimal_base
Set the root password:
root # echo "ttyS0" >> /PATH/TO/ROOTFS/etc/securetty
root # echo "root:YOURPASSWD" | chpasswd -R /PATH/TO/ROOTFS
Start Virtual Machine Manager.
(Optional) If not already present, add a local LXC connection by clicking › .
Select as the hypervisor and click .
Select the connection and click menu.
Activate and click .
Type the path to the root file system from Procedure 30.1, “Creating a Root File System” and click the button.
Choose the maximum amount of memory and CPUs to allocate to the container. Then click the button.
Type in a name for the container. This name will be used for all
virsh commands on the container.
Click . Select the network to connect the container to and click the button: the container will then be created and started. A console will also be automatically opened.
Network devices and hostdev devices with network capabilities can be provided with one or more IP addresses to set on the network device in the guest. However, some hypervisors or network device types will simply ignore them or only use the first one.
Edit the container XML configuration using virsh:
tux > virsh -c lxc:/// edit MYCONTAINER
The following example shows how to set one or more IP addresses:
[...]
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet0'/>
    <ip address='192.168.122.5' prefix='24'/>
    <ip address='192.168.122.5' prefix='24' peer='10.0.0.10'/>
    <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
    <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
  </interface>
  [...]
  <hostdev mode='capabilities' type='net'>
    <source>
      <interface>eth0</interface>
    </source>
    <ip address='192.168.122.6' prefix='24'/>
    <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
    <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
  </hostdev>
</devices>
[...]
peer: Optional attribute. Holds the IP address of the other end of a point-to-point network device.
family: Can be set to either ipv4 or ipv6.
address: Contains the IP address.
prefix: Optional parameter (will be automatically set if not specified). Defines the number of 1 bits in the netmask. For IPv4, the default prefix is determined according to the network “class” (A, B, or C).
gateway: If you do not specify a default gateway in the XML file, none will be set.
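To illustrate the prefix attribute, the following sketch converts a prefix length (the number of leading 1 bits) into the dotted-quad IPv4 netmask it represents; the helper name is made up for this example.

```shell
# Sketch: convert an IPv4 prefix length to a dotted-quad netmask.
# prefix_to_netmask is a made-up helper name for illustration.
prefix_to_netmask() {
    p=$1
    m=""
    for i in 1 2 3 4; do
        if [ "$p" -ge 8 ]; then
            o=255                       # full octet of 1 bits
            p=$((p - 8))
        else
            o=$((256 - (1 << (8 - p)))) # partial octet
            p=0
        fi
        m="$m${m:+.}$o"
    done
    echo "$m"
}

prefix_to_netmask 24   # 255.255.255.0
```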
You can also add route elements to define IP routes to add in the guest. This is used by the LXC driver.
[...]
<devices>
  <interface type='ethernet'>
    <source/>
    <ip address='192.168.123.1' prefix='24'/>
    <ip address='10.0.0.10' prefix='24' peer='192.168.122.5'/>
    <route family='ipv4' address='192.168.42.0' prefix='24' gateway='192.168.123.4'/>
    [...]
  </interface>
  [...]
</devices>
[...]
Network devices of type ethernet.
These are configured as subelements of the source element.
First IP address for the network device of type ethernet.
Second IP address for the network device of type ethernet.
Route to set on the host side of the network device.
Find further details about the attributes of this element at http://libvirt.org/formatnetwork.html#elementsStaticroute.
Save the changes and exit the editor.
To configure the container network, edit the
/etc/sysconfig/network/ifcfg-* files.
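A minimal static configuration could look like the following sketch; the interface name and address are assumptions matching the examples above, so adjust them to your setup:

```
# /PATH/TO/ROOTFS/etc/sysconfig/network/ifcfg-eth0 (sketch)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.122.5/24'
```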
Libvirt also allows you to run single applications instead of full-blown
Linux distributions in containers. In this example,
bash will be started in its own container.
Start Virtual Machine Manager.
(Optional) If not already present, add a local LXC connection by clicking › .
Select as the hypervisor and click .
Select the connection and click menu.
Activate and click .
Set the path to the application to be launched. As an example, the
field is filled with /bin/sh, which is fine for
creating a first container. Click .
Choose the maximum amount of memory and CPUs to allocate to the container. Click .
Type in a name for the container. This name will be used for all
virsh commands on the container.
Click . Select the network to connect the container to and click . The container will be created and started. A console will be opened automatically.
Note that the container will be destroyed after the application has finished running.
By default, containers are not secured using AppArmor or SELinux. There
is no graphical user interface to change the security model for a libvirt
domain, but virsh will help.
Edit the container XML configuration using virsh:
tux > virsh -c lxc:/// edit MYCONTAINER
Add the following to the XML configuration, save it and exit the editor.
<domain>
...
<seclabel type="dynamic" model="apparmor"/>
...
</domain>
With this configuration, an AppArmor profile for the container will be
created in the /etc/apparmor.d/libvirt
directory. The default profile only allows a minimal set of applications
to run in the container. This can be changed by modifying the
libvirt-CONTAINER-uuid
file: this file is not overwritten by libvirt.
openSUSE versions prior to Leap shipped LXC, while openSUSE Leap comes with the libvirt LXC driver, sometimes named libvirt-lxc to avoid confusion. Containers are not managed or configured in the same way in these tools. Here is a non-exhaustive list of differences.
The main difference is that domain configuration in libvirt is an XML file, while LXC configuration is a properties file. Most of the LXC properties can be mapped to the domain XML. The properties that cannot be migrated are:
lxc.network.script.up: this script can be
implemented using the /etc/libvirt/hooks/network
libvirt hook, though the script will need to be adapted.
lxc.network.ipv*: libvirt cannot set the container network configuration from the domain configuration.
lxc.network.name: libvirt cannot set the container network card name.
lxc.devttydir: libvirt does not allow changing the location of the console devices.
lxc.console: there is currently no way to log the output of the console into a file on the host for libvirt LXC containers.
lxc.pivotdir: libvirt does not allow fine-tuning
the directory used for the pivot_root.
/.olroot is used.
lxc.rootfs.mount: libvirt does not allow fine-tuning this.
LXC VLAN networks automatically create the VLAN interface on the host and then move it into the guest namespace. libvirt-lxc configuration can mention a VLAN tag ID only for Open vSwitch tap devices or PCI pass-through of SR-IOV VFs. The conversion tool therefore requires the user to manually create the VLAN interface on the host side.
LXC rootfs can also be an image file, but LXC brute-forces the mount to try to detect the proper file system format. libvirt-lxc can mount image files of several formats, but the 'auto' value for the format parameter is explicitly not supported. This means that the generated configuration will need to be tweaked by the user to get a proper match in that case.
LXC can support any cgroup configuration, even future ones, while libvirt domain configuration needs to map each of them.
LXC can mount block devices in the rootfs, but it cannot mount raw partition files: the file needs to be manually attached to a loop device. libvirt-lxc, on the other hand, can mount both block devices and partition files of any format.
Like Docker, libvirt allows you to inherit namespaces from other containers or processes, for example to share the network namespace. The following example shows how to share the required namespaces.
<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'> [...] <lxc:namespace> <lxc:sharenet type='netns' value='red'/> <lxc:shareuts type='name' value='CONTAINER_1'/> <lxc:shareipc type='pid' value='12345'/> </lxc:namespace> </domain>
The netns option is specific to sharenet.
Use it to share an existing network namespace (instead of creating a
new network namespace for the container). In this case, the
privnet option will be ignored.
libvirt-lxc
Since openSUSE
Leap, LXC is integrated into the libvirt library. This has
several advantages over using LXC as a separate solution, such as a
unified approach with other virtualization solutions and independence from the
kernel used. This chapter describes the steps needed to migrate an existing LXC
environment for use with the libvirt library.
The migration itself has two phases. You first need to migrate the host,
then the LXC containers. After that, you can run the original containers
as VM Guests in the libvirt environment.
Upgrade the host to openSUSE Leap 12 using the official DVD media.
After the upgrade, install the
libvirt-daemon-lxc and
libvirt-daemon-config-network packages.
Create a libvirt XML configuration
lxc_container.xml from the existing container
lxc_container:
tux > sudo virt-lxc-convert /etc/lxc/lxc_container/config > lxc_container.xml
Check if the network configuration on the host is the same as in the container configuration file, and fix it if needed.
Check the lxc_container.xml file for any unusual or
missing configuration. Note that some LXC configuration options
cannot be mapped to libvirt configuration. Although the conversion
should usually be fine, check Section 30.4, “Differences Between the libvirt LXC Driver and LXC” for more
details.
Define the container in libvirt based on the created XML
definition:
tux > sudo virsh -c lxc:/// define lxc_container.xml
After the host is migrated, the LXC container in libvirt will not
boot. It needs to be migrated to openSUSE Leap as well to get
everything working.
The baseproduct file is missing (and
zypper keeps complaining about it). Create the
relevant symbolic link:
root # ROOTFS=/var/lib/lxc/lxc_container/rootfs
root # ln -s $ROOTFS/etc/products.d/SUSE_SLES.prod $ROOTFS/etc/products.d/baseproduct
Add the DVD repository. Note that you need to replace the DVD device with the one attached to your container:
root # zypper --root $ROOTFS ar \
cd:///?devices=/dev/dvd SLES12-12
Disable or remove previous repositories:
root # zypper --root $ROOTFS lr
# | Alias                    | Name                     | Enabled | Refresh
--+--------------------------+--------------------------+---------+--------
1 | SLES12-12                | SLES12-12                | Yes     | No
2 | SUSE-[...]-Server-11-SP3 | SUSE-[...]-Server-11-SP3 | Yes     | No
root # zypper --root $ROOTFS rr 2
Upgrade the container:
root # zypper --root $ROOTFS dup
Install the Minimal pattern to make sure everything required is installed:
root # zypper --root $ROOTFS in -t pattern Minimal
After the host and container migration is complete, the container can be started:
root # virsh -c lxc:/// start lxc_container
If you need to get a console to view the logging messages produced by the container, run:
root # virsh -c lxc:/// console lxc_container
A software program that provides a graphical user interface for creating and managing virtual machines.
A guest operating system or application running on a virtual machine.
A virtualized PC environment (VM) capable of hosting a guest operating system and associated applications. It can also be called a VM Guest.
Virtualization Host Server
The physical computer running SUSE virtualization platform software. The virtualization environment consists of the hypervisor, the host environment, virtual machines, and associated tools, commands, and configuration files. Other commonly used terms include host, Host Computer, Host Machine (HM), Virtual Server (VS), Virtual Machine Host (VMH), and VM Host Server (VHS).
A set of commands for Xen that lets administrators manage virtual
machines from a command prompt on the host computer. It replaced the
deprecated xm tool stack.
Intel* and AMD* provide virtualization hardware-assisted technology. This reduces the frequency of VM IN/OUT (fewer VM traps), because software is a major source of overhead, and increases the efficiency (the execution is done by the hardware). Moreover, this reduces the memory footprint, provides better resource control, and allows secure assignment of specific I/O devices.
The term is used in Xen environments, and refers to a virtual machine. The host operating system is actually a virtual machine running in a privileged domain and can be called Dom0. All other virtual machines on the host run in unprivileged domains and can be called domain U's.
A software program available in YaST and Virtual Machine Manager
that provides a graphical interface to guide you through the steps to
create virtual machines. It can also be run in text mode by entering
virt-install at a command prompt in the host
environment.
The desktop or command line environment that allows interaction with the host computer's environment. It provides a command line environment and can also include a graphical desktop, such as GNOME or IceWM. The host environment runs as a special type of virtual machine that has privileges to control and manage other virtual machines. Other commonly used terms include Dom0, privileged domain, and host operating system.
The software that coordinates the low-level interaction between virtual machines and the underlying physical computer hardware.
The video output device that drives a video display from a memory buffer containing a complete frame of data for virtual machine displays running in paravirtual mode.
VirtFS is a new paravirtualized file system interface designed for improving pass-through technologies in the KVM environment. It is based on the VirtIO framework.
Virtual CPU capping allows you to set vCPU capacity to 1–100 percent of the physical CPU capacity.
Virtual CPU over-commitment is the ability to assign more virtual CPUs to VMs than the actual number of physical CPUs present in the physical system. This procedure does not increase the overall performance of the system, but might be useful for testing purposes.
CPU hotplugging describes the functions of replacing, adding, or removing a CPU without shutting down the system.
Processor affinity, or CPU pinning, enables the binding and unbinding of a process or a thread to a central processing unit (CPU) or a range of CPUs.
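CPU pinning can be tried from the command line with taskset (part of util-linux). This sketch pins a short-lived process to CPU 0 and reads its affinity mask back:

```shell
# Sketch: pin a process to CPU 0 and query its affinity mask.
taskset -c 0 sleep 5 &          # start a dummy process bound to CPU 0
pid=$!
mask=$(taskset -p "$pid" | awk '{print $NF}')
echo "affinity mask: $mask"     # 1 = only CPU 0 in the mask
kill "$pid" 2>/dev/null
```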
A type of network connection that lets a virtual machine be identified on an external network as a unique identity that is separate from and unrelated to its host computer.
A type of network bridge that has no physical network device or virtual network device provided by the host. This lets virtual machines communicate with other virtual machines on the same host but not with the host or on an external network.
The network outside a host's internal network environment.
A type of network configuration that restricts virtual machines to their host environment.
A type of network bridge that has a virtual network device but no physical network device provided by the host. This lets virtual machines communicate with the host and other virtual machines on the host. Virtual machines can communicate on an external network through the host.
A type of network connection that lets a virtual machine use the IP address and MAC address of the host.
A type of network bridge that has a physical network device but no virtual network device provided by the host. This lets virtual machines communicate on an external network but not with the host. This lets you separate virtual machine network communications from the host environment.
A type of network bridge that has both a physical network device and a virtual network device provided by the host.
The Advanced Host Controller Interface (AHCI) is a technical standard defined by Intel* that specifies the operation of Serial ATA (SATA) host bus adapters in a non-implementation-specific manner.
Data storage devices, such as CD-ROM drives or disk drives, that move data in the form of blocks. Partitions and volumes are also considered block devices.
A virtual disk based on a file, also called a disk image file.
A method of accessing data on a disk at the individual byte level instead of through its file system.
A disk image file that does not reserve its entire amount of disk space but expands as data is written to it.
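This behavior can be observed with standard tools; the sketch below creates a sparse 1 GiB file and compares its apparent size with the blocks actually allocated:

```shell
# Sketch: a sparse file has a large apparent size but allocates
# (almost) no disk blocks until data is written to it.
img=$(mktemp)
truncate -s 1G "$img"
apparent=$(stat -c %s "$img")       # apparent size: 1073741824 bytes
used_kib=$(du -k "$img" | cut -f1)  # near 0: no blocks allocated yet
echo "$apparent bytes apparent, $used_kib KiB used"
rm -f "$img"
```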
The drive designation given to the first virtual disk on a paravirtual machine.
Kernel Control Groups (commonly called “cgroups”) are a kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchical organized groups to isolate resources.
See also Chapter 9, Kernel Control Groups.
A change root (chroot, or change root jail) is a
section in the file system that is isolated from the rest of the file
system. For this purpose, the chroot or
pivot_root command is used to change the root of the
file system. A program that is executed in such a “chroot
jail” cannot access files outside the designated directory tree.
Can be seen as a kind of “virtual machine” on the host server that can run any Linux system, for example openSUSE, SUSE Linux Enterprise Desktop, or SUSE Linux Enterprise Server. The main difference from a normal virtual machine is that the container shares its kernel with the host it runs on.
A kernel feature to isolate some resources like network, users, and others for a group of processes.
Advanced Configuration and Power Interface (ACPI) specification provides an open standard for device configuration and power management by the operating system.
Advanced Error Reporting
AER is a capability provided by the PCI Express specification which allows for reporting of PCI errors and recovery from some of them.
Advanced Programmable Interrupt Controller (APIC) is a family of interrupt controllers.
Bus:Device:Function
Notation used to succinctly describe PCI and PCIe devices.
Control Groups
Feature to limit, account for, and isolate resource usage (CPU, memory, disk I/O, etc.).
Earliest Deadline First
This scheduler provides weighted CPU sharing in an intuitive way and uses real-time algorithms to ensure time guarantees.
Extended Page Tables
Performance in a virtualized environment is close to that in a native environment. Virtualization does create some overheads, however. These come from the virtualization of the CPU, the MMU, and the I/O devices. In some recent x86 processors AMD and Intel have begun to provide hardware extensions to help bridge this performance gap. In 2006, both vendors introduced their first generation hardware support for x86 virtualization with AMD-Virtualization (AMD-V) and Intel® VT-x technologies. Recently Intel introduced its second generation of hardware support that incorporates MMU-virtualization, called Extended Page Tables (EPT). EPT-enabled systems can improve performance compared to using shadow paging for MMU virtualization. EPT increases memory access latencies for a few workloads. This cost can be reduced by effectively using large pages in the guest and the hypervisor.
Flux Advanced Security Kernel
Xen implements a type of mandatory access control via a security architecture called FLASK using a module of the same name.
High Assurance Platform
HAP combines hardware and software technologies to improve workstation and network security.
Hardware Virtual Machine (the term commonly used by Xen for a fully virtualized machine).
Input/Output Memory Management Unit
IOMMU (AMD* technology) is a memory management unit (MMU) that connects a direct memory access-capable (DMA-capable) I/O bus to the main memory.
Kernel Same Page Merging
KSM allows for automatic sharing of identical memory pages between guests to save host memory. KVM is optimized to use KSM if enabled on the VM Host Server.
Memory Management Unit
The MMU is a computer hardware component responsible for handling accesses to memory requested by the CPU. Its functions include translation of virtual addresses to physical addresses (that is, virtual memory management), memory protection, cache control, bus arbitration, and, in simpler computer architectures (especially 8-bit systems), bank switching.
Physical Address Extension
32-bit x86 operating systems use Physical Address Extension (PAE) mode to enable addressing of more than 4 GB of physical memory. In PAE mode, page table entries (PTEs) are 64 bits in size.
Process-context identifiers
These are a facility by which a logical processor may cache information for multiple linear-address spaces, so that the processor may retain cached information when software switches to a different linear address space. The INVPCID instruction is used for fine-grained TLB flushes, which benefits the kernel.
Peripheral Component Interconnect Express
PCIe was designed to replace the older PCI, PCI-X, and AGP bus standards. It has numerous improvements, including a higher maximum system bus throughput, a lower I/O pin count, and a smaller physical footprint. It also has a more detailed error detection and reporting mechanism (AER) and native hotplug functionality. It is backward compatible with PCI.
Page Size Extended
PSE refers to a feature of x86 processors that allows for pages larger than the traditional 4 KiB size. PSE-36 capability offers 4 more bits, in addition to the normal 10 bits, which are used inside a page directory entry pointing to a large page. This allows a large page to be located in 36-bit address space.
Page Table
A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process. Physical addresses are those unique to the hardware (RAM).
QXL is a cirrus VGA framebuffer (8M) driver for virtualized environments.
Rapid Virtualization Indexing, Nested Page Tables
An AMD second generation hardware-assisted virtualization technology for the processor memory management unit (MMU).
Serial ATA
SATA is a computer bus interface that connects host bus adapters to mass storage devices such as hard disks and optical drives.
Sandboxed environment where only predetermined system calls are permitted for added protection against malicious behavior.
Supervisor Mode Execution Protection
This prevents the execution of user-mode pages by the Xen hypervisor, making many application-to-hypervisor exploits much harder.
Simple Protocol for Independent Computing Environments
An SXP file is a Xen Configuration File.
Tiny Code Generator
Instructions are emulated rather than executed by the CPU.
Transparent Huge Pages
This allows CPUs to address memory using pages larger than the default 4 KB. This helps reduce memory consumption and CPU cache usage. KVM is optimized to use THP (via madvise and opportunistic methods) if enabled on the VM Host Server.
Translation Lookaside Buffer
TLB is a cache that memory management hardware uses to improve virtual address translation speed. All current desktop, notebook, and server processors use a TLB to map virtual and physical address spaces, and it is nearly always present in any hardware that uses virtual memory.
A scheduling entity containing the state of a virtualized CPU.
Virtual Desktop Infrastructure
Since kernel v3.6, VFIO is a new method of accessing PCI devices from user space.
Virtualization Host Server
Virtual Machine Control Structure
VMX non-root operation and VMX transitions are controlled by a data structure called a virtual-machine control structure (VMCS). Access to the VMCS is managed through a component of processor state called the VMCS pointer (one per logical processor). The value of the VMCS pointer is the 64-bit address of the VMCS. The VMCS pointer is read and written using the instructions VMPTRST and VMPTRLD. The VMM configures a VMCS using the VMREAD, VMWRITE, and VMCLEAR instructions. A VMM could use a different VMCS for each virtual machine that it supports. For a virtual machine with multiple logical processors (virtual processors), the VMM could use a different VMCS for each virtual processor.
Virtual Machine Device Queue
Multi-queue network adapters exist which support multiple VMs at the hardware level, having separate packet queues associated with the different hosted VMs (by means of the IP addresses of the VMs).
Virtual Machine Monitor (Hypervisor)
When the processor encounters an instruction or event of interest to the Hypervisor (VMM), it exits from guest mode back to the VMM. The VMM emulates the instruction or other event, at a fraction of native speed, and then returns to guest mode. The transitions from guest mode to the VMM and back again are high-latency operations, during which guest execution is completely stalled.
VMM will run in VMX root operation and guest software will run in VMX non-root operation. Transitions between VMX root operation and VMX non-root operation are called VMX transitions.
Virtual Machine eXtensions
New support for software control of TLB (VPID improves TLB performance with small VMM development effort).
Virtualization Technology for Directed I/O
Component to establish end-to-end integrity for guests via Trusted Computing.
To be able to create x509 client and server certificates, you need to
have them issued by a Certificate Authority (CA). It is recommended to
set up an independent CA that only issues certificates for
libvirt.
Set up a CA as described in Section 17.2.1, “Creating a Root CA”.
Create a server and a client certificate as described in Section 17.2.4, “Creating or Revoking User Certificates”. The Common Name (CN) for the server certificate must be the fully qualified host name, while the Common Name for the client certificate can be freely chosen. For all other fields stick with the defaults suggested by YaST.
Export the client and server certificates to a temporary location (for
example, /tmp/x509/) by performing the following
steps:
Select the certificate on the tab.
Choose › › , provide the and the full path and the file name under
, for example,
/tmp/x509/server.pem or
/tmp/x509/client.pem.
Open a terminal and change to the directory where you have saved the certificate and issue the following commands to split it into certificate and key (this example splits the server key):
tux > csplit -z -f s_ server.pem '/-----BEGIN/' '{1}'
mv s_00 servercert.pem
mv s_01 serverkey.pem
Repeat the procedure for each client and server certificate you want to export.
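The csplit invocation above can be tried safely with a dummy file. This sketch shows how the '/-----BEGIN/' pattern separates a combined PEM file into its certificate and key parts (-z suppresses the empty leading piece so numbering starts at the certificate):

```shell
# Sketch: split a combined PEM file at each '-----BEGIN' marker.
# A dummy server.pem stands in for a real exported certificate.
workdir=$(mktemp -d)
cd "$workdir"
cat > server.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...certificate data...
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
...key data...
-----END PRIVATE KEY-----
EOF

csplit -z -f s_ server.pem '/-----BEGIN/' '{1}' > /dev/null
mv s_00 servercert.pem   # first section: the certificate
mv s_01 serverkey.pem    # second section: the private key
```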
Finally export the CA certificate by performing the following steps:
Switch to the tab.
Choose › › and enter the full path and the file name under
, for example,
/tmp/x509/cacert.pem.
Since the early Xen 2.x releases, xend has been
the de facto toolstack for managing Xen installations. In Xen
4.1, a new toolstack called libxenlight (also known as libxl) was
introduced with technology preview status. libxl is a small, low-level
library written in C. It has been designed to provide a simple API for
all client toolstacks
(XAPI,
libvirt, xl). In Xen 4.2, libxl was promoted to officially
supported status and xend was marked deprecated.
xend has been included in the Xen 4.3 and 4.4
series to give users ample time to convert their tooling to libxl. It
has been removed from the upstream Xen project and will no longer be
provided starting with the Xen 4.5 series and openSUSE Leap
42.1.
Starting with openSUSE Leap 42.1,
xend is no longer supported.
One of the major differences between xend and libxl is
that the former is stateful, while the latter is stateless. With
xend, all client applications such as
xm and libvirt see the same system state.
xend is responsible for maintaining state for the
entire Xen host. In libxl, client applications such as
xl or libvirt must maintain state. Thus domains
created with xl are not visible to or known by other libxl
applications such as libvirt. Generally, it is discouraged to mix
and match libxl applications; it is preferred that a single libxl
application be used to manage a Xen host. In SUSE Linux Enterprise 12, we
recommend using libvirt to manage Xen hosts. This allows
management of the Xen system through libvirt applications such
as virt-manager, virt-install,
virt-viewer,
libguestfs, etc. If xl is used to manage the Xen
host, any virtual machines under its management will not be accessible
to libvirt. Hence, they are not accessible to any of the libvirt
applications.
The xl application, along with its configuration
format (see man xl.cfg), was designed to be
backward-compatible with the xm application and its
configuration format (see man xm.cfg). Existing
xm configuration should be usable with
xl. Since libxl is stateless, and
xl does not support the notion of managed domains,
SUSE recommends using libvirt to manage SLES 12 Xen hosts.
SUSE has provided a tool called xen2libvirt, which
provides a simple mechanism to import domains previously managed by
xend into libvirt. See
Section B.2, “Import Xen Domain Configuration into libvirt” for more information on
xen2libvirt.
The basic structure of every xl command is:
xl subcommand OPTIONS DOMAIN
DOMAIN is the numeric domain id, or the domain name (which will be internally translated to domain id), and OPTIONS are subcommand specific options.
Although xl/libxl was designed to be backward-compatible with xm/xend, there are a few differences that should be noted:
Managed or persistent domains. libvirt now provides this
functionality.
xl/libxl does not support Python code in the domain configuration files.
xl/libxl does not support creating domains from SXP format
configuration files (xm create
-F).
xl/libxl does not support sharing storage across DomU's via
w! in domain configuration files.
xl/libxl is relatively new and under heavy development, hence a few features are still missing with regard to the xm/xend toolstack:
SCSI LUN/Host pass-through (PVSCSI)
USB pass-through (PVUSB)
Direct Kernel Boot for fully virtualized Linux guests for Xen
Before upgrading a SLES 11 SP3 Xen host to SLES 12:
You must remove any Python code from your xm domain configuration files.
It is recommended to capture the libvirt domain XML from all existing
virtual machines using virsh dumpxml
DOMAIN_NAME >
DOMAIN_NAME.xml.
It is recommended to back up the
/etc/xen/xend-config.sxp and
/boot/grub/menu.lst files to keep references of
previous parameters used for Xen.
Currently, live migrating virtual machines running on a SLES 11 SP3
Xen host to a SLES 12 Xen host is not supported. The
xend and libxl toolstacks are not
runtime-compatible. Virtual machine downtime will be required to move
the virtual machines from SLES 11 SP3 to a SLES 12 host.
Import Xen Domain Configuration into libvirt
xen2libvirt is a command line tool to import legacy
Xen domain configuration into the libvirt virtualization library
(see The Virtualization book for more information on libvirt).
xen2libvirt provides an easy way to import domains managed by the
deprecated xm/xend tool stack into the new
libvirt/libxl tool stack. Several domains can be imported at once
using its --recursive mode.
xen2libvirt is included in the
xen-tools package. If needed, install it with
tux > sudo zypper install xen-tools
The general syntax of xen2libvirt is
xen2libvirt <options> /path/to/domain/config
where options can be:
-h, --help
Prints short information about xen2libvirt usage.
-c, --convert-only
Converts the domain configuration to the libvirt XML format, but
does not import it into libvirt.
-r, --recursive
Converts and/or imports all domain configurations recursively, starting at the specified path.
-f, --format
Specifies the format of the source domain configuration. Can be either
xm or sexpr (S-expression
format).
-v, --verbose
Prints more detailed information about the import process.
libvirt
Suppose you have a Xen domain managed with xm
with the following configuration saved in
/etc/xen/sle12.xm:
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 128
name = "SLE12"
root = "/dev/hda1 ro"
disk = [ "file:/var/xen/sle12.img,hda1,w" ]
Convert it to libvirt XML without importing it, and look at its
content:
tux > sudo xen2libvirt -f xm -c /etc/xen/sle12.xm > /etc/libvirt/qemu/sles12.xml
# cat /etc/libvirt/qemu/sles12.xml
<domain type='xen'>
  <name>SLE12</name>
  <uuid>43e1863c-8116-469c-a253-83d8be09aa1d</uuid>
  <memory unit='KiB'>131072</memory>
  <currentMemory unit='KiB'>131072</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='xenpv'>linux</type>
    <kernel>/boot/vmlinuz-2.6-xenU</kernel>
  </os>
  <clock offset='utc' adjustment='reset'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='file'/>
      <source file='/var/xen/sle12.img'/>
      <target dev='hda1' bus='xen'/>
    </disk>
    <console type='pty'>
      <target type='xen' port='0'/>
    </console>
  </devices>
</domain>
To import the domain into libvirt, you can either run the same
xen2libvirt command without the -c
option, or use the exported file
/etc/libvirt/qemu/sles12.xml and define a new
Xen domain using virsh:
tux > sudo virsh define /etc/libvirt/qemu/sles12.xml
xm and xl Applications
The purpose of this chapter is to list the differences between the xm and xl applications. Generally, xl is designed to be compatible with xm, so replacing xm with xl in custom scripts or tools is usually sufficient.
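For simple scripts, that replacement can even be automated. The following sketch copies a management script and rewrites whole-word xm invocations to xl (the script paths are hypothetical, and it relies on GNU sed's \b word boundary); review the result by hand, since options removed from xl still need manual attention:

```shell
# Sketch: copy a management script, rewriting whole-word `xm` to `xl`.
# Uses GNU sed's \b word boundary; review the output before using it.
migrate_script() {
  src="$1"
  dst="$2"
  sed 's/\bxm\b/xl/g' "$src" > "$dst"
}
```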
You can also manage domains with the libvirt framework using the virsh command. In this documentation, only the first OPTION for virsh is shown. To get more help on an option, run:
virsh help OPTION
To easily understand the difference between xl and xm commands, the following notation is used in this section:

| Notation | Meaning |
|---|---|
| (-) minus | Option exists in xm, but not in xl |
| (+) plus | Option exists in xl, but not in xm |
| Options | Task |
|---|---|
| (+) | Verbose, increase the verbosity of the output |
| (+) | Dry run, do not actually execute the command |
| (+) | Force execution |
List of common options of xl and
xm, and their libvirt equivalents.
| Options | Task | libvirt Equivalent |
|---|---|---|
| destroy DOMAIN | Immediately terminate the domain. | |
| domid DOMAIN_NAME | Convert a domain name to a DOMAIN_ID. | |
| domname DOMAIN_ID | Convert a DOMAIN_ID to a DOMAIN_NAME. | |
| help | Display the short help message (that is, common commands). | |
| pause DOMAIN_ID | Pause a domain. When in a paused state, the domain still consumes allocated resources such as memory, but is not eligible for scheduling by the Xen hypervisor. | |
| unpause DOMAIN_ID | Move a domain out of the paused state. This allows a previously paused domain to be eligible for scheduling by the Xen hypervisor. | |
| rename DOMAIN_ID NEW_DOMAIN_NAME | Change the domain name of DOMAIN_ID to NEW_DOMAIN_NAME. | |
| sysrq DOMAIN <letter> | Send a Magic System Request to the domain; each type of request is represented by a different letter. It can be used to send SysRq requests to Linux guests; see https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html for more information. It requires PV drivers to be installed in your guest OS. | |
| vncviewer OPTIONS DOMAIN | Attach to the domain's VNC server, forking a vncviewer process. | |
| | Enable the vcpu-count virtual CPUs for the domain in question. | |
| vcpu-list DOMAIN_ID | List VCPU information for a specific domain. If no domain is specified, VCPU information for all domains is provided. | |
| vcpu-pin DOMAIN_ID <VCPU\|all> <CPUs\|all> | Pin a VCPU to only run on the specified CPUs. The keyword all can be used to apply the CPU list to all VCPUs in the domain. | |
| | Read the Xen message buffer, similar to dmesg on a Linux system. The buffer contains informational, warning, and error messages created during Xen's boot process. | |
| | Print the current uptime of the running domains. | |
| | Send debug keys to Xen. It is the same as pressing the Xen conswitch (Ctrl-A by default) three times and then pressing "keys". | |
| | Move a domain specified by DOMAIN_ID or DOMAIN into a CPU_POOL. | |
| | Deactivate a CPU pool. This is possible only if no domain is active in the CPU pool. | |
| | Detach a domain's virtual block device. devid may be the symbolic name or the numeric device ID given to the device by Dom0. | |
| | Create a new network device in the domain specified by DOMAIN_ID. network-device describes the device to attach, using the same format as the vif string in the domain configuration file. | |
| | Hotplug a new pass-through PCI device to the specified domain. BDF is the PCI Bus/Device/Function of the physical device to pass through. | |
| | List pass-through PCI devices for a domain. | |
| | Determine if the FLASK security module is loaded and enforcing its policy. | |
| | Enable or disable enforcing of the FLASK access controls. The default is permissive and can be changed using the flask_enforcing option on the hypervisor's command line. | |
List of xm options which are no longer available with the xl tool stack, and a replacement solution where available.
The list of removed domain management commands and their replacements.
Domain Management Removed Options
| Options | Task | Equivalent |
|---|---|---|
| (-) | Print the Xend log. | This log file can be found in |
| (-) | Remove a domain from Xend domain management. | |
| (-) | Adds a domain to Xend domain management | |
| (-) | Start a Xend managed domain that was previously added | |
| (-) | Dry run - prints the resulting configuration in SXP but does not create the domain | |
| (-) | Reset a domain | |
| (-) | Show domain state | |
| (-) | Proxy Xend XMLRPC over stdio | |
| (-) | Moves a domain out of the suspended state and back into memory | |
| (-) | Suspend a domain to a state file so that it can be later resumed | |
USB options are not available with the xl/libxl tool stack. virsh provides the attach-device and detach-device options, but these do not yet work with USB.
USB Devices Management Removed Options
| Options | Task |
|---|---|
| (-) | Add a new USB physical bus to a domain |
| (-) | Delete a USB physical bus from a domain |
| (-) | Attach a new USB physical bus to the domain's virtual port |
| (-) | Detach a USB physical bus from the domain's virtual port |
| (-) | List the domain's attachment state of all virtual ports |
| (-) | List all the assignable USB devices |
| (-) | Create a domain's new virtual USB host controller |
| (-) | Destroy a domain's virtual USB host controller |
CPU management options have changed. New options are available; see Section B.3.5.10, “xl cpupool-*”.
CPU Management Removed Options
| Options | Task |
|---|---|
| (-) | Adds a CPU pool to Xend CPU pool management |
| (-) | Starts a Xend CPU pool |
| (-) | Removes a CPU pool from Xend management |
create #
xl create CONFIG_FILE OPTIONS VARS
libvirt Equivalent:
virsh create
xl create Changed Options #
| Options | Task |
|---|---|
| (*) -f=FILE, --defconfig=FILE | Use the given configuration file |
xm create Removed Options #
| Options | Task |
|---|---|
| (-) | Skip DTD checking - skips checks on XML before creating |
| (-) | XML dry run |
| (-) | Use the given SXP formatted configuration script |
| (-) | Search path for configuration scripts |
| (-) | Print the available configuration variables (vars) for the configuration script |
| (-) | Dry run - prints the configuration in SXP but does not create the domain |
| (-) | Connect to the console after the domain is created |
| (-) | Quiet mode |
| (-) | Leave the domain paused after it is created |
xl create Added Options #
| Options | Task |
|---|---|
| (+) | Attach to the domain's VNC server, forking a vncviewer process |
| (+) | Pass the VNC password to vncviewer via stdin |
console #
xl console OPTIONS DOMAIN
libvirt Equivalent
virsh console
xl console Added Options #
| Option | Task |
|---|---|
| (+) | Connect to a PV console or to an emulated serial console. PV consoles are the only consoles available for PV domains, while HVM domains can have both |
xl info
xm info Removed Options #
| Options | Task |
|---|---|
| (-) | NUMA info |
| (-) | List Xend configuration parameters |
dump-core #
xl dump-core DOMAIN FILENAME
libvirt Equivalent
virsh dump
xm dump-core Removed Options #
| Options | Task |
|---|---|
| (-) | Dump core without pausing the domain |
| (-) | Crash domain after dumping core |
| (-) | Reset domain after dumping core |
list #
xl list OPTIONS DOMAIN
libvirt Equivalent
virsh list --all
xm list Removed Options #
| Options | Task |
|---|---|
| (-) | The output for |
| (-) | Output information for VMs in the specified state |
xl list Added Options #
| Options | Task |
|---|---|
| (+) | Also prints the security labels |
| (+) | Also prints the domain UUIDs, the shutdown reason and security labels |
mem-* #
libvirt Equivalent
virsh setmem
virsh setmaxmem
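Note that xm mem-set takes its size in MiB, while virsh setmem interprets a bare number as KiB, so scripts moving from xm to virsh need a unit conversion. A small sketch (the domain name is hypothetical; it only prints the equivalent command rather than running it):

```shell
# Sketch: print the virsh equivalent of `xm mem-set DOMAIN MiB`.
# virsh setmem interprets a plain number as KiB, so convert MiB -> KiB.
mem_set_cmd() {
  domain="$1"
  mib="$2"
  echo "virsh setmem $domain $((mib * 1024))"
}
```

For the 128 MiB guest from the earlier example, this prints `virsh setmem sle12 131072`, matching the 131072 KiB seen in the converted domain XML.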
xl mem-* Changed Options #
| Options | Task |
|---|---|
| | Appending |
| | Set the domain's used memory using the balloon driver |
migrate #
xl migrate OPTIONS DOMAIN HOST
libvirt Equivalent
virsh migrate --live hvm-sles11-qcow2 xen+CONNECTOR://USER@IP_ADDRESS/
xm migrate Removed Options #
| Options | Task |
|---|---|
| (-) | Use live migration. This migrates the domain between hosts without shutting down the domain |
| (-) | Set maximum MBs allowed for migrating the domain |
| (-) | Change home server for managed domains |
| (-) | Number of iterations before final suspend (default: 30) |
| (-) | Max amount of memory to transfer before final suspend (default: 3*RAM) |
| (-) | Number of dirty pages before final suspend (default: 50) |
| (-) | Abort migration instead of doing final suspend |
| (-) | Log progress of migration to |
| (-) | Use SSL connection for migration |
xl migrate Added Options #
| Options | Task |
|---|---|
| (+) | Use <sshcommand> instead of |
| (+) | On the new host, do not wait in the background (on <host>) for the death of the domain |
| (+) | Send <config> instead of the configuration file from creation |
xl reboot OPTIONS DOMAIN
libvirt Equivalent
virsh reboot
xm reboot Removed Options #
| Options | Task |
|---|---|
| (-) | Reboot all domains |
| (-) | Wait for reboot to complete before returning. This may take a while, as all services in the domain need to be shut down cleanly |
xl reboot Added Options #
| Option | Task |
|---|---|
| (+) | Fall back to an ACPI reset event for HVM guests with no PV drivers |
xl save OPTIONS DOMAIN CHECK_POINT_FILE CONFIG_FILE
libvirt Equivalent
virsh save
xl save Added Options #
| Option | Task |
|---|---|
| (+) | Leave the domain running after creating the snapshot |
xl restore OPTIONS CONFIG_FILE CHECK_POINT_FILE
libvirt Equivalent
virsh restore
xl restore Added Options #
| Options | Task |
|---|---|
| (+) | Do not unpause the domain after restoring it |
| (+) | Do not wait in the background for the death of the domain on the new host |
| (+) | Enable debug messages |
| (+) | Attach to the domain's VNC server, forking a vncviewer process |
| (+) | Pass the VNC password to vncviewer via stdin |
xl shutdown OPTIONS DOMAIN
libvirt Equivalent
virsh shutdown
xm shutdown Removed Options #
| Options | Task |
|---|---|
| (-) | Wait for the domain to complete shutdown before returning |
| (-) | Shut down all guest domains |
| (-) | |
| (-) | |
xl shutdown Added Options #
| Option | Task |
|---|---|
| (+) | If the guest does not support PV shutdown control, fall back to sending an ACPI power event |
xl trigger Changed Options #
| Option | Task |
|---|---|
| | Send a trigger to a domain. Only available for HVM domains |
xl sched-* #
xl sched-credit OPTIONS
libvirt Equivalent
virsh schedinfo
xm sched-credit Removed Options #
| Options | Task |
|---|---|
| | Domain |
| | A domain with a weight of 512 will get twice as much CPU as a domain with a weight of 256 on a contended host. Legal weights range from 1 to 65535 and the default is 256 |
| | The CAP optionally fixes the maximum amount of CPU a domain can consume |
xl sched-credit Added Options #
| Options | Task |
|---|---|
| (+) | Restrict output to domains in the specified cpupool |
| (+) | Specify to list or set pool-wide scheduler parameters |
| (+) | Timeslice tells the scheduler how long to allow VMs to run before preempting |
| (+) | Ratelimit attempts to limit the number of schedules per second |
xl sched-credit2 OPTIONS
libvirt Status
virsh only supports the credit scheduler, not the credit2 scheduler
xm sched-credit2 Removed Options #
| Options | Task |
|---|---|
| | Domain |
| | Legal weights range from 1 to 65535 and the default is 256 |
xl sched-credit2 Added Options #
| Option | Task |
|---|---|
| (+) | Restrict output to domains in the specified cpupool |
xl sched-sedf OPTIONS
xm sched-sedf Removed Options #
| Options | Task |
|---|---|
| | The normal EDF scheduling usage in milliseconds |
| | The normal EDF scheduling usage in milliseconds |
| | Scaled period if domain is doing heavy I/O |
| | Flag for allowing domain to run in extra time (0 or 1) |
| | Another way of setting CPU slice |
xl sched-sedf Added Options #
| Options | Task |
|---|---|
| (+) | Restrict output to domains in the specified cpupool |
| (+) | Domain |
xl cpupool-* #
xl cpupool-cpu-remove CPU_POOL <CPU nr>|node:<node nr>
xl cpupool-list [-c|--cpus] CPU_POOL
xm cpupool-list Removed Options #
| Option | Task |
|---|---|
| (-) | Output all CPU pool details in SXP format |
xl cpupool-cpu-add CPU_POOL cpu-nr|node:node-nr
xl cpupool-create OPTIONS CONFIG_FILE [Variable=Value ...]
xm cpupool-create Removed Options #
| Options | Task |
|---|---|
| (-) | Use the given Python configuration script. The configuration script is loaded after arguments have been processed |
| (-) | Dry run - prints the resulting configuration in SXP but does not create the CPU pool |
| (-) | Print the available configuration variables (vars) for the configuration script |
| (-) | Search path for configuration scripts. The value of PATH is a colon-separated directory list |
| (-) | CPU pool configuration to use (SXP) |
xl pci-detach [-f] DOMAIN_ID <BDF>
libvirt Equivalent
virsh detach-device
xl pci-detach Added Options #
| Option | Task |
|---|---|
| (+) | If |
xm block-list Removed Options #
| Option | Task |
|---|---|
| (-) | List virtual block devices for a domain |
xl network-attach Removed Options #
| Option | Task |
|---|---|
| (-) | |
| Options | Task |
|---|---|
| | Update the saved configuration for a running domain. This has no immediate effect but will be applied when the guest is next restarted. This command is useful to ensure that runtime modifications made to the guest will be preserved when the guest is restarted |
| | List the count of shared pages. List for the specified domain; otherwise, list for all domains |
| | Prints information about guests. This list excludes information about service or auxiliary domains such as Dom0 and stubdoms |
| | Renames a cpu-pool to newname |
| | Splits up the machine into one cpu-pool per NUMA node |
| cd-insert DOMAIN <VirtualDevice> <type:path> | Insert a CD-ROM into a guest domain's existing virtual CD drive. The virtual drive must already exist but can currently be empty |
| | Eject a CD-ROM from a guest's virtual CD drive. Only works with HVM domains |
| | List all the assignable PCI devices. These are devices in the system which are configured to be available for pass-through and are bound to a suitable PCI back-end driver in Dom0 rather than a real driver |
| | Make the device at PCI Bus/Device/Function BDF assignable to guests. This will bind the device to the pciback driver |
| | Make the device at PCI Bus/Device/Function BDF not assignable to guests. This will at least unbind the device from pciback |
| | Load FLASK policy from the given policy file. The initial policy is provided to the hypervisor as a multiboot module; this command allows runtime updates to the policy. Loading new security policy will reset runtime changes to device labels |
For more information on the Xen tool stacks, refer to the following online resources:
xl command line.
xl.cfg domain configuration file syntax.
xl disk configuration option.
XL vs Xend feature comparison.
virsh command.
xm Compatible Format #
Although xl is now the current toolkit for managing
Xen guests (apart from the preferred libvirt), you may need to
export the guest configuration to the previously used
xm format. To do this, follow these steps:
First export the guest configuration to a file:
tux > virsh dumpxml guest_id > guest_cfg.xml
Then convert the configuration to the xm format:
tux > virsh domxml-to-native xen-xm guest_cfg.xml > guest_xm_cfg

This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems using an AutoYaST profile containing installation and configuration data. This manual guides you through the basic steps of auto-installation: preparation, installation, and configuration.
AutoYaST is a system for unattended mass deployment of openSUSE Leap systems. AutoYaST installations are performed using an AutoYaST control file (also called “profile”) with installation and configuration data. That control file can be created using the configuration interface of AutoYaST and can be provided to YaST during installation in different ways.
A standard installation of openSUSE Leap is based on a wizard workflow. This is user-friendly and efficient when installing a few machines. However, it becomes repetitive and time-consuming when installing many machines.
To avoid this, you could perform mass deployments by copying the hard disk of the first successful installation. Unfortunately, even minute configuration differences between machines would then need to be dealt with individually afterward. For example, when using static IP addresses, the IP address would have to be reset for each machine.
A regular installation of openSUSE Leap is semi-automated by default. The user is prompted to select the necessary information at the beginning of the installation (usually language only). YaST then generates a proposal for the underlying system depending on different factors and system parameters. Usually—and especially for new systems—such a proposal can be used to install the system and provides a usable installation. The steps following the proposal are fully automated.
AutoYaST can be used where no user intervention is required or where customization is required. Using an AutoYaST control file, YaST prepares the system for a custom installation and does not interact with the user, unless specified in the file controlling the installation.
AutoYaST is not an automated GUI system. This means that usually many screens will be skipped—you will never see the language selection interface, for example. AutoYaST will simply pass the language parameter to the sub-system without displaying any language-related interface.
Using AutoYaST, multiple systems can easily be installed in parallel and
quickly. They need to share the same environment and similar, but not
necessarily identical, hardware. The installation is defined by an XML
configuration file (usually named autoinst.xml)
called the “AutoYaST control file”. It can initially be
created using existing configuration resources and easily be tailored
for any specific environment.
AutoYaST is fully integrated and provides various options for installing and configuring a system. The main advantage over other auto-installation systems is the possibility to configure a computer by using existing modules, avoiding the use of custom scripts that are normally executed at the end of the installation.
This document will guide you through the three steps of auto-installation:
Preparation: All relevant information about the target system is collected and turned into the appropriate directives of the control file. The control file is transferred onto the target system where its directives will be parsed and fed into YaST.
Installation: YaST performs the installation of the basic system using the data from the AutoYaST control file.
Configuration: After the installation of the basic system, the system configuration is performed in the second stage of the installation. User defined post-installation scripts from the AutoYaST control file will also be executed at this stage.
A regular installation of openSUSE Leap 42.3 is performed in a single stage. The auto-installation process, however, is divided into two stages. After the installation of the basic system the system boots into the second stage where the system configuration is done.
The packages autoyast2 and
autoyast2-installation have to be installed to run
the second stage in the installed system correctly. Otherwise an error
will be shown before booting into the installed system.
The second stage can be turned off with the
second_stage parameter:
<general>
<mode>
<confirm config:type="boolean">false</confirm>
<second_stage config:type="boolean">false</second_stage>
</mode>
</general>The complete and detailed process is illustrated in the following figure:
The control file usually is a configuration description for a single system. It consists of sets of resources with properties including support for complex structures such as lists, records, trees and large embedded or referenced objects.
A lot of major changes were introduced with openSUSE Leap 42.3 (the switch to systemd and GRUB 2 for example). These changes also required fundamental changes in AutoYaST, therefore you cannot use AutoYaST control files created on previous openSUSE Leap versions to install openSUSE Leap 42.3 and vice versa.
The XML configuration format provides a consistent file structure, which is easy to learn and to remember when attempting to configure a new system.
The AutoYaST control file uses XML to describe the system installation
and configuration. XML is a commonly used markup, and many users are
familiar with the concepts of the language and the tools used to process
XML files. If you edit an existing control file or create a control file
using an editor from scratch, it is strongly recommended to validate the
control file. This can be done using a validating XML parser such as
xmllint or jing, for example
(see Section 3.3, “Creating/Editing a Control File Manually”).
The following example shows a control file in XML format:
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile
xmlns="http://www.suse.com/1.0/yast2ns"
xmlns:config="http://www.suse.com/1.0/configns">
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">btrfs</filesystem>
<size>10G</size>
<mount>/</mount>
</partition>
<partition>
<filesystem config:type="symbol">xfs</filesystem>
<size>120G</size>
<mount>/data</mount>
</partition>
</partitions>
</drive>
</partitioning>
<scripts>
<pre-scripts>
<script>
<interpreter>shell</interpreter>
<filename>start.sh</filename>
<source>
<![CDATA[
#!/bin/sh
echo "Starting installation"
exit 0
]]>
</source>
</script>
</pre-scripts>
</scripts>
</profile>
Below is an example of a basic control file container, the actual content of which is explained later on in this chapter.
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile
  xmlns="http://www.suse.com/1.0/yast2ns"
  xmlns:config="http://www.suse.com/1.0/configns">
  <!-- RESOURCES -->
</profile>
The <profile> element (root node)
contains one or more distinct resource elements. The permissible
resource elements are specified in the schema files.
A resource element either contains multiple and distinct property and resource elements, or multiple instances of the same resource element, or it is empty. The permissible content of a resource element is specified in the schema files.
A property element is either empty or contains a literal value. The permissible property elements and values in each resource element are specified in the schema files.
An element can be either a container of other elements (a resource) or it has a literal value (a property); it can never be both. This restriction is specified in the schema files. A configuration component with more than one value must either be represented as an embedded list in a property value or as a nested resource.
Nested resource elements allow a tree-like structure of configuration components to be built to any level.
...
<drive>
<device>/dev/sda</device>
<partitions> <!-- this is wrong, explanation below -->
<partition>
<size>10G</size>
<mount>/</mount>
</partition>
<partition>
<size>1G</size>
<mount>/tmp</mount>
</partition>
</partitions>
</drive>
...
In the example above the disk resource consists of a device property and a partitions resource. The partitions resource contains multiple instances of the partition resource. Each partition resource contains a size and mount property.
The XML schema defines the partitions element as a resource supporting
one or multiple partition element children. If only one partition
resource is specified, it is important to use the
config:type attribute of the partitions element to
indicate that the content is a resource, in this case a list. Using the
partitions element without specifying the type in this case will
result in undefined behavior, as YaST will incorrectly interpret the
partitions resource as a property. The example below illustrates this
use case.
...
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<size>10G</size>
<mount>/</mount>
</partition>
<partition>
<size>1G</size>
<mount>/tmp</mount>
</partition>
</partitions>
</drive>
...
Global attributes are used to define metadata on resources and properties. Attributes are used to define context switching. They are also used for naming and typing properties as shown in the previous sections. Attributes are in a separate namespace so they do not need to be treated as reserved words in the default namespace.
Global attributes are defined in the configuration namespace and must
always be prefixed with config:. All attributes are
optional. Most can be used with both resource and property elements, but
some can only be used with one type of element, which is specified in
the schema files.
The type of an element is defined using the
config:type attribute. The type of a resource
element is always RESOURCE, although this can also be made explicit
with this attribute (to ensure correct identification of an empty
element, for example, when there is no schema file to refer to). A
resource element cannot be any other type and this restriction is
specified in the schema file. The type of a property element determines
the interpretation of its literal value. The type of a property element
defaults to STRING, as specified in the schema file.
The full set of permissible types is specified in the schema file.
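As a sketch of these typing rules (the element names below are made up for illustration; only the config:type values follow the conventions described above), typed properties look like this:

```xml
<!-- Hypothetical resource: element names are examples only -->
<example_resource>
  <a_string>plain text; STRING is the default property type</a_string>
  <a_number config:type="integer">42</a_number>
  <a_flag config:type="boolean">true</a_flag>
  <a_symbol config:type="symbol">btrfs</a_symbol>
  <a_list config:type="list">
    <list_entry>first</list_entry>
    <list_entry>second</list_entry>
  </a_list>
</example_resource>
```

Note how the list resource carries an explicit config:type="list", matching the partitions example in the previous section.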
To create the control file, you need to collect information about the systems you are going to install. This includes hardware data and network information among other things. Make sure you have the following information about the machines you want to install:
Hard disk types and sizes
Graphical interface and attached monitor, if any
Network interface and MAC address if known (for example, when using DHCP)
To create the control file for one or more computers, a configuration interface based on YaST is provided. This system depends on existing modules which are usually used to configure a computer in regular operation mode, for example, after openSUSE Leap is installed.
The configuration management system lets you easily create control files and manage a repository of configurations for use in a networked environment with multiple clients.
With some exceptions, almost all resources of the control file can be configured using the configuration management system. The system offers flexibility, and the configuration of some resources is identical to the one available in the YaST control center. In addition to the existing and familiar modules, new interfaces were created for special and complex configurations, for example for partitioning, general options and software.
Furthermore, using a CMS guarantees the validity of the resulting control file and its direct use for starting automated installation.
Make sure the configuration system is installed (package
autoyast2) and call it using
the YaST control center or as root with the following command
(make sure the DISPLAY variable is set correctly to
start the graphical user interface instead of the text-based one):
/sbin/yast2 autoyast
If editing the control file manually, make sure it has a valid syntax. To
check the syntax, use the tools already available on the distribution. For
example, to verify that the file is well-formed (has a valid XML
structure), use the utility xmllint available with the
libxml2 package:
xmllint <control file>
If the control file is not well formed, for example, if a tag is not
closed, xmllint will report the errors.
To validate the control file, use the tool jing from the
package of the same name. During
validation, misplaced or missing tags and attributes as well as wrong
attribute values are detected.
jing /usr/share/YaST2/schema/autoyast/rng/profile.rng <control file>
/usr/share/YaST2/schema/autoyast/rng/profile.rng is
provided by the package yast2-schema. This file
describes the syntax and classes of an AutoYaST profile.
Before going on with the autoinstallation, fix any errors resulting from such checks. The autoinstallation process cannot be started with an invalid or not well-formed control file.
You can use any XML editor available on your system, or any text editor with XML support (for example, Emacs or Vim). However, it is not optimal to create the control file manually for many machines; it should only be seen as an interface between the autoinstallation engine and the Configuration Management System (CMS).
The built-in nxml-mode turns Emacs into a fully-fledged XML editor with automatic tag completion and validation. Refer to the Emacs help for instructions on how to set up nxml-mode.
If you have a template and want to change a few things via script or
command line, use an XSLT processor like xsltproc.
For example, if you have an AutoYaST control file and want to fill out
the host name via a script (if you do this often, it is worth
scripting), proceed as follows.
First, create an XSL file:
<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
xmlns:y2="http://www.suse.com/1.0/yast2ns"
xmlns:config="http://www.suse.com/1.0/configns"
xmlns="http://www.suse.com/1.0/yast2ns"
version="1.0">
<xsl:output method="xml" encoding="UTF-8" indent="yes" omit-xml-declaration="no" cdata-section-elements="source"/>
<!-- the parameter names -->
<xsl:param name="hostname"/>
<xsl:param name="domain"/>
<xsl:template match="/">
<xsl:apply-templates select="@*|node()"/>
</xsl:template>
<xsl:template match="y2:dns">
<xsl:copy>
<!-- where to copy the parameters -->
<domain><xsl:value-of select="string($domain)"/></domain>
<hostname><xsl:value-of select="string($hostname)"/></hostname>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
<xsl:template match="@*|node()" >
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
This file expects the host name and the domain name as parameters from the user.
<xsl:param name="hostname"/> <xsl:param name="domain"/>
Those parameters will be copied into the dns section of the control file. That means that if there already is a domain element in the dns section, you will get a second one, which makes the control file invalid.
For more information about XSLT, see the official specification at www.w3.org/TR/xslt.
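As a usage sketch, the stylesheet above can be applied with xsltproc, passing both parameters on the command line (the file names hostname.xsl, autoinst.xml and autoinst_new.xml are examples; adjust them to your setup):

```shell
# Apply the XSL transformation, filling in host and domain name
xsltproc --stringparam hostname myhost \
         --stringparam domain example.com \
         hostname.xsl autoinst.xml > autoinst_new.xml
```

The --stringparam option passes each value as an XSLT string parameter, matching the <xsl:param> declarations in the stylesheet.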
This chapter introduces important parts of a control file for standard purposes. To learn about other available options, use the configuration management system.
Note that for some configuration options to work, additional packages need to be installed, depending on the software selection you have configured. If you choose to install a minimal system then some packages might be missing and need to be added to the individual package selection.
YaST will install packages required in the second phase of the
installation and before the post-installation phase of AutoYaST has
started. However, if necessary YaST modules are not available in the
system, important configuration steps will be skipped. For example, no
security settings will be configured if
yast2-security is not
installed.
General options include all the settings related to the installation process and the environment of the installed system.
The mode section configures the behavior of AutoYaST with regard to confirmation and rebooting. The following needs to be in the <general><mode> section.
By default, the user must confirm the auto-installation process. This
option allows the user to view and change the settings for a target
system before they are committed and can be used for debugging.
confirm is set to true by default
to avoid recursive installs when the system schedules a reboot after
initial system setup. Only disable confirmation if you want to carry out
a fully unattended installation.
With halt you cause AutoYaST to shut down the machine
after all packages have been installed. Instead of a reboot into stage
two, the machine is turned off. The boot loader is already installed and
all your chroot scripts have run.
final_halt and final_reboot halt
or reboot the machine after installation and configuration have
finished at the end of stage 2.
final_restart_services: After installation and
configuration are finished at the end of stage 2 all services will be
restarted by default. With this flag set to false no
restart will be done.
activate_systemd_default_target: After installation and
configuration are finished at the end of stage 2 the default target system
will be activated.
ntp_sync_time_before_installation specifies the NTP
server with which the system time is synchronized before starting
the installation on the target system. If this option is not used, the
time will not be synchronized. Keep in mind that you need a reachable
NTP server and a network connection while running the installation.
max_systemd_wait specifies how long AutoYaST waits at
most for systemd to set up the default target. Normally you do not need
to bother with this entry. If it is not preset, a reasonable default (30
seconds) is used.
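A minimal sketch combining the two options above (the server name is an example; the placement inside the <general><mode> section follows the description above):

```xml
<general>
  <mode>
    <!-- synchronize time via NTP before the installation starts -->
    <ntp_sync_time_before_installation>ntp.example.com</ntp_sync_time_before_installation>
    <!-- wait at most 30 seconds for systemd to set up the default target -->
    <max_systemd_wait config:type="integer">30</max_systemd_wait>
  </mode>
</general>
```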
Some openSUSE versions use the kexec feature and do not reboot
anymore between stage 1 and stage 2. With the forceboot
option you can force the reboot in case you need it for some reason. The
value true will reboot, false will
not reboot and a missing forceboot option uses the
product's default.
Some drivers, for example the proprietary drivers for Nvidia and ATI graphics cards, need a reboot and will not work properly when using kexec. Therefore the default on openSUSE Leap products is to always do a proper reboot.
| Attribute | Values | Description |
|---|---|---|
| confirm | If this boolean is set to true, the user must confirm the installation proposal before the installation starts. <confirm config:type="boolean">true</confirm> | Optional. The default is true. |
| halt | Shuts down the machine after the first stage. So if you turn it on again, the machine boots and the second stage of the autoinstallation starts. <halt config:type="boolean">true</halt> | Optional. The default is false. |
| second_stage | This boolean determines if AutoYaST will run in the second stage too (after the partitioning, software and boot loader installation of the first stage). If you set this to false, the second stage is skipped. <second_stage config:type="boolean">true</second_stage> | Optional. The default is true. |
| final_reboot | If you set this to true, the machine is rebooted after installation and configuration have finished at the end of the second stage. <final_reboot config:type="boolean">true</final_reboot> | Optional. The default is false. |
| final_halt | If you set this to true, the machine is shut down after installation and configuration have finished at the end of the second stage. <final_halt config:type="boolean">true</final_halt> | Optional. The default is false. |
| confirm_base_product_license | If you set this to true, the EULA of the base product is shown and needs to be accepted; otherwise the installation is canceled. <confirm_base_product_license config:type="boolean">true</confirm_base_product_license> | Optional. The default is false. |
| final_restart_services | If you set this entry to false, services will not be restarted at the end of the second stage, after installation and configuration have finished. <final_restart_services config:type="boolean">false</final_restart_services> | Optional. The default is true. |
| activate_systemd_default_target | If you set this entry to false, the default systemd target will not be activated at the end of the second stage. <activate_systemd_default_target config:type="boolean">false</activate_systemd_default_target> | Optional. The default is true. |
| forceboot | Some openSUSE releases use kexec to avoid the reboot after the first stage. They immediately boot into the installed system. You can force a reboot with this: <forceboot config:type="boolean">true</forceboot> | Optional. If not set, the product's default is used. |
| self_update | This option allows enabling (set to true) or disabling (set to false) the installer self-update feature. <self_update config:type="boolean">false</self_update> Alternatively, you can specify the boot parameter self_update. | Optional. |
| self_update_url | Location of the update repository to use during the installer self-update. Check out the Deployment Guide to find further information about this feature. Important: the repository must contain installer updates only. The URL can contain the variable $arch, which is replaced by the system's architecture. <self_update_url>http://example.com/updates/$arch</self_update_url> Alternatively, you can specify the update repository with the self_update boot parameter. | Optional. |
AutoYaST allows you to configure the proposal screen with the
<proposals config:type="list"> option in the control file.
All proposals that are listed in that section are shown in the proposal
screen if you set the confirm option to
true. Proposals are also used during the regular
installation and can be found in the file
control.xml in the root directory of the
installation media.
<general>
<signature-handling>
<accept_unsigned_file config:type="boolean">true</accept_unsigned_file>
<accept_file_without_checksum config:type="boolean">true</accept_file_without_checksum>
<accept_verification_failed config:type="boolean">true</accept_verification_failed>
<accept_unknown_gpg_key config:type="boolean">true</accept_unknown_gpg_key>
<import_gpg_key config:type="boolean">true</import_gpg_key>
<accept_non_trusted_gpg_key config:type="boolean">true</accept_non_trusted_gpg_key>
</signature-handling>
<cio_ignore config:type="boolean">false</cio_ignore> <!-- IBM z Systems only -->
<mode>
<halt config:type="boolean">false</halt>
<forceboot config:type="boolean">false</forceboot>
<final_reboot config:type="boolean">false</final_reboot>
<final_halt config:type="boolean">false</final_halt>
<confirm_base_product_license config:type="boolean">false</confirm_base_product_license>
<confirm config:type="boolean">true</confirm>
<second_stage config:type="boolean">true</second_stage>
</mode>
<self_update_url>http://example.com/updates/$arch</self_update_url>
<proposals config:type="list">
<proposal>partitions_proposal</proposal>
</proposals>
<wait>
<pre-modules config:type="list">
<module>
<name>networking</name>
<sleep>
<time config:type="integer">10</time>
<feedback config:type="boolean">true</feedback>
</sleep>
<script>
<source>sleep 5</source>
<debug config:type="boolean">false</debug>
</script>
</module>
</pre-modules>
<post-modules config:type="list">
<module>
<name>networking</name>
<sleep>
<time config:type="integer">3</time>
<feedback config:type="boolean">true</feedback>
</sleep>
<script>
<source>sleep 7</source>
<debug config:type="boolean">false</debug>
</script>
</module>
</post-modules>
</wait>
</general>
You can let AutoYaST sleep before and after each
module run during the second stage. You can run scripts and/or pass a
value (in seconds) for AutoYaST to sleep. In the example above, AutoYaST
will sleep for 15 seconds (10+5) before the network configuration starts
and 10 seconds (3+7) after the network configuration is done. The
scripts in the example do not add much, because you could pass those
values via “time” as well; they only illustrate how
scripts in the wait section work.
To blacklist devices, use the flag cio_ignore.
This option is available on IBM z Systems only.
When installing on a network storage that is accessed via multiple
paths, you need to enable multipath for the installation with the
start_multipath parameter. This parameter must be placed
within the following XML structure:
<general>
<storage>
<start_multipath config:type="boolean">true</start_multipath>
</storage>
</general>
Alternatively, you can pass the following parameter to linuxrc:
LIBSTORAGE_MULTIPATH_AUTOSTART=ON
Installer updates are delivered using a dedicated repository, so the same security checks that are applied to other repositories and packages are performed on these updates.
The section /general/signature-handling can be used to
specify a different behavior.
<general>
<signature-handling>
<accept_unknown_gpg_key config:type="boolean">true</accept_unknown_gpg_key>
</signature-handling>
</general>
The report resource manages three types of pop-ups
that may appear during installation:
message pop-ups (usually non-critical, informative messages),
warning pop-ups (if something might go wrong),
error pop-ups (in case an error occurs).
<report>
<errors>
<show config:type="boolean">true</show>
<timeout config:type="integer">0</timeout>
<log config:type="boolean">true</log>
</errors>
<warnings>
<show config:type="boolean">true</show>
<timeout config:type="integer">10</timeout>
<log config:type="boolean">true</log>
</warnings>
<messages>
<show config:type="boolean">true</show>
<timeout config:type="integer">10</timeout>
<log config:type="boolean">true</log>
</messages>
<yesno_messages>
<show config:type="boolean">true</show>
<timeout config:type="integer">10</timeout>
<log config:type="boolean">true</log>
</yesno_messages>
</report>
Depending on your experience, you can skip, log and show (with timeout)
those messages. It is recommended to show all
messages with timeout. Warnings can be skipped in
some places but should not be ignored.
The default setting in auto-installation mode is to show errors without timeout and to show all warnings/messages with a timeout of 10 seconds.
Note that not all messages during installation are controlled by the
report resource. Some critical messages concerning
package installation and partitioning will show up ignoring your
settings in the report section. Usually those
messages will need to be answered with Yes or
No.
This documentation is for yast2-bootloader and applies
to GRUB 2. For older product versions shipping with legacy GRUB, refer to
the documentation that comes with your distribution in
/usr/share/doc/packages/autoyast2/.
The general structure of the AutoYaST boot loader part looks like the following:
<bootloader>
<loader_type>
<!-- boot loader type (grub2 or grub2-efi) -->
</loader_type>
<global>
<!--
entries defining the installation settings for GRUB 2 and
the generic boot code
-->
</global>
<device_map config:type="list">
<!-- entries defining the order of devices -->
</device_map>
</bootloader>
Define which boot loader to use: grub2 or
grub2-efi.
<loader_type>grub2</loader_type>
This is an important, though optional, part. Define here where to install
GRUB 2 and how the boot process will work. Again,
yast2-bootloader proposes a configuration if you
do not define one. Usually the AutoYaST control file includes only this
part and all other parts are added automatically during installation by
yast2-bootloader. Unless you have some special
requirements, do not specify the boot loader configuration in the XML
file.
<global>
  <activate config:type="boolean">true</activate>
  <timeout config:type="integer">10</timeout>
  <suse_btrfs config:type="boolean">true</suse_btrfs>
  <terminal>gfxterm</terminal>
  <gfxmode>1280x1024x24</gfxmode>
</global>
| Attribute | Description |
|---|---|
| activate | Set the boot flag on the boot partition. The boot partition can be different from the root partition. <activate config:type="boolean">true</activate> |
| append | Kernel parameters added at the end of boot entries for normal and recovery mode. <append>nomodeset vga=0x317</append> |
| boot_boot | Write GRUB 2 to a separate /boot partition. If no separate /boot partition exists, GRUB 2 will be written to the root partition. <boot_boot>false</boot_boot> |
| boot_custom | Write GRUB 2 to a custom device. <boot_custom>/dev/sda3</boot_custom> |
| boot_extended | Write GRUB 2 to the extended partition (important if you want to use generic boot code and the /boot partition is on a logical partition). <boot_extended>false</boot_extended> |
| boot_mbr | Write GRUB 2 to the MBR of the first disk in the order (device.map includes the order of disks). <boot_mbr>false</boot_mbr> |
| boot_root | Write GRUB 2 to the root ("/") partition. <boot_root>false</boot_root> |
| generic_mbr | Write generic boot code to the MBR; will be ignored if boot_mbr is set to true. <generic_mbr config:type="boolean">false</generic_mbr> |
| gfxmode | Graphical resolution of the GRUB 2 screen (requires <terminal> to be set to gfxterm). <gfxmode>1280x1024x24</gfxmode> |
| os_prober | If set to true, automatically searches for operating systems already installed and creates boot entries for them during the installation. <os_prober config:type="boolean">false</os_prober> |
| suse_btrfs | If set to false, booting from Btrfs snapshots is not supported. <suse_btrfs config:type="boolean">false</suse_btrfs> |
| serial | Command to execute if the GRUB 2 terminal mode is set to serial. <serial>serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1</serial> |
| terminal | Specify the GRUB 2 terminal mode to use. Valid entries are console, gfxterm and serial. If set to serial, the serial command needs to be specified with <serial>, too. <terminal>serial</terminal> |
| timeout | The timeout in seconds until the default boot entry is booted automatically. <timeout config:type="integer">10</timeout> |
| vgamode | Adds the kernel parameter vga=VALUE to the boot entries. <vgamode>0x317</vgamode> |
| xen_append | Kernel parameters added at the end of boot entries for Xen guests. <xen_append>nomodeset vga=0x317</xen_append> |
| xen_kernel_append | Kernel parameters added at the end of boot entries for Xen kernels on the VM Host Server. <xen_kernel_append>dom0_mem=768M</xen_kernel_append> |
GRUB 2 avoids mapping problems between BIOS drives and Linux devices by using device ID strings (UUIDs) or file system labels when generating its configuration files. GRUB 2 utilities create a temporary device map on the fly, which is usually sufficient, particularly on single-disk systems. However, if you need to override the automatic device mapping mechanism, create your custom mapping in this section.
<device_map config:type="list">
<device_map_entry>
<firmware>hd0</firmware> <!-- order of devices in target map -->
<linux>/dev/disk/by-id/ata-ST3500418AS_6VM23FX0</linux> <!-- name of device (disk) -->
</device_map_entry>
</device_map>
The elements listed below must be placed within the following XML structure:
<profile>
<partitioning config:type="list">
<drive>
...
</drive>
</partitioning>
</profile>|
Attribute |
Values |
Description |
|---|---|---|
|
|
The device you want to configure in this <drive>
section. You can use persistent device names via id, like
<device>/dev/sda</device> |
Optional. If left out, AutoYaST tries to guess the device. See Tip: Skipping Devices on how to influence guessing.
A RAID must always have
If set to |
|
|
If set to <initialize config:type="boolean">true</initialize> |
Optional. The default is |
|
|
A list of <partition> entries (see Section 4.4.2, “Partition Configuration”). <partitions config:type="list"> <partition>...</partition> ... </partitions> |
Optional. If no partitions are specified, AutoYaST will create a reasonable partitioning (see Section 4.4.5, “Automated Partitioning”). |
|
|
This value only makes sense with LVM. <pesize>8M</pesize> |
Optional. Default is 4M for LVM volume groups. |
|
|
Specifies the strategy AutoYaST will use to partition the hard disk. Choose between:
|
This parameter should be provided. |
|
|
Specify the type of the drive. Choose between:
<type config:type="symbol">CT_LVM</type> |
Optional. Default is |
|
|
Describes the type of the partition table. Choose between:
<disklabel>gpt</disklabel> |
Optional. By default YaST decides what makes sense. If a partition table of a different type already exists, it will be recreated with the given type only if it does not include any partition that should be kept or reused. |
|
|
This value only makes sense for type=CT_LVM drives. If you are
reusing a logical volume group and you set this to
<keep_unknown_lv config:type="boolean" >false</keep_unknown_lv> |
Optional. The default is |
|
|
Enables snapshots for Btrfs file systems. It does not apply to other kinds of file systems. <enable_snapshots config:type="boolean" >false</enable_snapshots> |
Optional. The default is |
You can influence AutoYaST's device-guessing for cases where you do not specify a <device> entry on your own. Usually AutoYaST would use the first device it can find that looks reasonable but you can configure it to skip some devices like this:
<partitioning config:type="list">
<drive>
<initialize config:type="boolean">true</initialize>
<skip_list config:type="list">
<listentry>
<!-- skip devices that use the usb-storage driver -->
<skip_key>driver</skip_key>
<skip_value>usb-storage</skip_value>
</listentry>
<listentry>
<!-- skip devices that are smaller than 1GB -->
<skip_key>size_k</skip_key>
<skip_value>1048576</skip_value>
<skip_if_less_than config:type="boolean">true</skip_if_less_than>
</listentry>
<listentry>
<!-- skip devices that are larger than 100GB -->
<skip_key>size_k</skip_key>
<skip_value>104857600</skip_value>
<skip_if_more_than config:type="boolean">true</skip_if_more_than>
</listentry>
</skip_list>
</drive>
</partitioning>
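The size_k values in the skip list above are plain kibibyte counts (1 GiB = 1024 * 1024 KiB). A quick sanity check of the arithmetic, as a sketch (not part of AutoYaST itself):

```shell
# size_k in a <skip_value> is the device size in KiB
echo $(( 1 * 1024 * 1024 ))    # value for "smaller than 1GB"
echo $(( 100 * 1024 * 1024 ))  # value for "larger than 100GB"
```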
For a list of all possible <skip_key> values, run yast2
ayast_probe on an already installed system.
The elements listed below must be placed within the following XML structure:
<drive>
<partitions config:type="list">
<partition>
...
</partition>
</partitions>
</drive>|
Attribute |
Values |
Description |
|---|---|---|
|
|
Specify if this partition or logical volume must be created or if it already exists. <create config:type="boolean" >false</create> |
If set to |
|
|
Partition will be encrypted. <crypt_fs config:type="boolean">false</crypt_fs> |
Default is |
|
|
Encryption key <crypt_key>xxxxxxxx</crypt_key> |
Only needed if |
|
|
The mount point of this partition. <mount>/</mount> <mount>swap</mount> |
You should have at least a root partition (/) and a swap partition. |
|
|
Mount options for this partition. <fstopt> ro,noatime,user,data=ordered,acl,user_xattr </fstopt> |
See |
|
|
The label of the partition (useful for the
<label>mydata</label> |
See |
|
|
The uuid of the partition (only useful for the
<uuid >1b4e28ba-2fa1-11d2-883f-b9a761bde3fb</uuid> |
See |
|
|
The size of the partition, for example 4G, 4500M, etc. The /boot
partition and the swap partition can have the size auto. You can also specify the size as a percentage: 10% will use 10% of the size of the hard disk or volume group. You can mix auto, max, size, and percentage as you like. <size>10G</size> | |
|
|
Specify if AutoYaST should format the partition. <format config:type="boolean">false</format> |
If you set |
|
|
Specify the file system to use on this partition:
<filesystem config:type="symbol" >ext3</filesystem> |
Optional. The default is |
|
|
Specify an option string that is added to the mkfs command. <mkfs_options>-I 128</mkfs_options> |
Optional. Only use this when you know what you are doing. |
|
|
The partition number of this partition. If you have set
<partition_nr config:type="integer" >2</partition_nr> |
Usually, numbers 1 to 4 are primary partitions while 5 and higher are logical partitions. |
|
|
The <partition_id config:type="integer" >131</partition_id> |
The default is |
|
|
Instead of a partition number, you can tell AutoYaST to mount a
partition by <mountby config:type="symbol" >label</mountby> |
See |
|
|
List of subvolumes to create for a file system of type Btrfs. This key only makes sense for file systems of type Btrfs. See Section 4.4.3, “Btrfs subvolumes” for more information. <subvolumes config:type="list"> <path>tmp</path> <path>opt</path> <path>srv</path> <path>var/crash</path> <path>var/lock</path> <path>var/run</path> <path>var/tmp</path> <path>var/spool</path> ... </subvolumes> |
If no |
|
|
If this partition is on a logical volume in a volume group, specify
the logical volume name here (see the <lv_name>opt_lv</lv_name> | |
|
|
An integer that configures LVM striping. Specify across how many devices you want to stripe (spread data). <stripes config:type="integer">2</stripes> | |
|
|
Specify the size of each block in KB. <stripesize config:type="integer" >4</stripesize> | |
|
|
If this is a physical partition used by (part of) a volume group (LVM), you need to specify the name of the volume group here. <lvm_group>system</lvm_group> | |
|
|
<pool config:type="boolean">false</pool> | |
|
|
The name of the LVM thin pool that is used as a data store for this thin logical volume. If this is set to something non-empty, it implies that the volume is a so-called thin logical volume. <used_pool>my_thin_pool</used_pool> | |
|
|
If this physical volume is part of a RAID, specify the name of the RAID. <raid_name>/dev/md0</raid_name> | |
|
|
Specify the type of the RAID. <raid_type>raid1</raid_type> | |
|
|
Specify RAID options, see below. <raid_options>...</raid_options> | |
|
|
This boolean must be <resize config:type="boolean" >false</resize> |
Resizing only works with physical disks, not with LVM volumes. |
If size is set to auto for the
boot partition, AutoYaST will not create it unless it is
strictly required in order to boot. If you want to force the creation of
a boot partition, you should specify the size using a number or a
percentage.
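A minimal sketch of forcing the creation of a separate /boot partition by giving an explicit size (the 500M value is an arbitrary illustration, not a recommendation):

```xml
<partition>
  <mount>/boot</mount>
  <!-- an explicit size instead of "auto" forces creation -->
  <size>500M</size>
</partition>
```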
As mentioned in Section 4.4.2, “Partition Configuration”, it is possible to define a set of subvolumes for each Btrfs file system. In its simplest form, it is just a list of path entries:
<subvolumes config:type="list">
<path>tmp</path>
<path>opt</path>
<path>srv</path>
<path>var/crash</path>
<path>var/lock</path>
<path>var/run</path>
<path>var/tmp</path>
<path>var/spool</path>
</subvolumes>
AutoYaST supports disabling copy-on-write for a given subvolume. In that case, a slightly more complex syntax should be used:
<subvolumes config:type="list">
<listentry>tmp</listentry>
<listentry>opt</listentry>
<listentry>srv</listentry>
<listentry>
<path>var/lib/pgsql</path>
<copy_on_write config:type="boolean">false</copy_on_write>
</listentry>
</subvolumes>
The following elements must be placed within the following XML structure:
<partition>
<raid_options>
...
</raid_options>
</partition>|
Attribute |
Values |
Description |
|---|---|---|
|
|
<chunk_size>4</chunk_size> | |
|
|
Possible values are:
For RAID6 and RAID10 the following values can be used:
<parity_algorithm >left_asymmetric</parity_algorithm> | |
|
|
Possible values are: <raid_type>raid1</raid_type> |
The default is |
|
|
This list contains the optional order of the physical devices: <device_order config:type="list"> <device>/dev/sdb2</device> <device>/dev/sda1</device> ... </device_order> |
This is optional and the default is alphabetical order. |
For automated partitioning, you only need to provide the sizes and mount points of partitions. All other data needed for successful partitioning is calculated during installation—unless provided in the control file.
If no partitions are defined and the specified drive is also the drive where the root partition should be created, the following partitions are created automatically:
/boot
The size of the /boot partition is determined by
the architecture of the target system.
swap
The size of the swap partition is determined by the amount of memory available in the system.
/ (root partition)
The size of the root partition is determined by the space left after
creating swap and /boot.
Depending on the initial status of the drive and how it was previously partitioned, the use property offers the following ways to create the default partitioning:
free: If the drive is already partitioned, the new partitions are created in the free space on the hard disk. This requires the availability of sufficient space for all selected packages in addition to swap.
all: Delete all existing partitions (Linux and non-Linux) before partitioning.
linux: Delete all existing Linux partitions. Other partitions (for example Windows partitions) remain untouched. Note that this works only if the Linux partitions are at the end of the device.
A list of partition numbers: Select specific partitions to delete. Start the selection with the last available partition. Repartitioning only works if the selected partitions are neighbors and located at the end of the device.
The value provided in the use property determines
how existing data and partitions are treated. The value
all means that the entire disk will be erased. Make
backups and use the confirm property if you need to
keep some partitions with important data. Otherwise, no pop-ups will
notify you about partitions being deleted.
If multiple drives are in the target system, identify all drives with their device names and specify how the partitioning should be performed.
Partition sizes can be given in gigabytes, megabytes or can be set to a
flexible value using the keywords auto and
max. max uses all available space
on a drive, therefore should only be set for the last partition on the
drive. With auto the size of a
swap or boot partition is
determined automatically, depending on the memory available and the
type of the system.
A fixed size can be given as shown below: 1GB,
1G, 1000MB, or 1000M
will all create a partition of 1 GB.
The following is an example of a single drive system, which is not pre-partitioned and should be automatically partitioned according to the described pre-defined partition plan. If you do not specify the device, it will be automatically detected.
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<use>all</use>
</drive>
</partitioning>
A more detailed example shows how existing partitions and multiple drives are handled.
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<mount>/</mount>
<size>10G</size>
</partition>
<partition>
<mount>swap</mount>
<size>1G</size>
</partition>
</partitions>
</drive>
<drive>
<device>/dev/sdb</device>
<use>all</use>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<mount>/data1</mount>
<size>15G</size>
</partition>
<partition>
<filesystem config:type="symbol">jfs</filesystem>
<mount>/data2</mount>
<size>auto</size>
</partition>
</partitions>
<use>free</use>
</drive>
</partitioning>
Usually this is not needed, because AutoYaST can delete partitions one by one automatically. However, you have the option to let AutoYaST clear the whole partition table instead of deleting partitions individually.
Go to the drive section and add:
<initialize config:type="boolean">true</initialize>
With this setting, AutoYaST will delete the partition table before it starts to analyze the actual partitioning and calculates its partition plan. This means that you cannot keep any of your existing partitions.
By default a file system to be mounted is identified in
/etc/fstab by the device name. This
identification can be changed so the file system is found by searching
for a UUID or a volume label. Note that not all file systems can be
mounted by UUID or a volume label. To specify how a partition is to be
mounted, use the mountby property which has the
symbol type. Possible options are:
device (default)
label
UUID
If you choose to mount the partition using a label, the name entered
for the label property is used as the volume label.
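For example, mounting by label requires both the mountby and the label property; a minimal sketch (the label name mydata is an arbitrary illustration):

```xml
<partition>
  <mount>/data</mount>
  <!-- the value of <label> becomes the volume label -->
  <label>mydata</label>
  <mountby config:type="symbol">label</mountby>
</partition>
```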
Add any valid mount option in the fourth field of
/etc/fstab. Multiple options are separated by
commas. Possible fstab options:
ro (read-only)
No write access to the file system. Default is
false.
noatime (no access time)
Access times are not updated when a file is read. Default is
false.
user (mountable by user)
The file system can be mounted by a normal user. Default is
false.
data=journal, data=ordered, data=writeback (data journaling mode)
journal
All data is committed to the journal prior to being written to the main file system.
ordered
All data is directly written to the main file system before its metadata is committed to the journal.
writeback
Data ordering is not preserved.
acl (access control lists)
Enable access control lists on the file system.
user_xattr (extended user attributes)
Allow extended user attributes on the file system.
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<format config:type="boolean">true</format>
<fstopt>ro,noatime,user,data=ordered,acl,user_xattr</fstopt>
<mount>/local</mount>
<mountby config:type="symbol">uuid</mountby>
<partition_id config:type="integer">131</partition_id>
<size>10G</size>
</partition>
</partitions>
In some cases you should leave partitions untouched and only format specific target partitions, rather than creating them from scratch. For example, if different Linux installations coexist, or you have another operating system installed, you likely do not want to wipe these out. You may also want to leave data partitions untouched.
Such scenarios require certain knowledge about the target systems and hard disks. Depending on the scenario, you might need to know the exact partition table of the target hard disk with partition ids, sizes and numbers. With this data you can tell AutoYaST to keep certain partitions, format others and create new partitions if needed.
The following example will keep partitions 1, 2 and 5 and delete partition 6 to create two new partitions. All remaining partitions will only be formatted.
<partitioning config:type="list">
<drive>
<device>/dev/sdc</device>
<partitions config:type="list">
<partition>
<create config:type="boolean">false</create>
<format config:type="boolean">true</format>
<mount>/</mount>
<partition_nr config:type="integer">1</partition_nr>
</partition>
<partition>
<create config:type="boolean">false</create>
<format config:type="boolean">false</format>
<partition_nr config:type="integer">2</partition_nr>
<mount>/space</mount>
</partition>
<partition>
<create config:type="boolean">false</create>
<format config:type="boolean">true</format>
<filesystem config:type="symbol">swap</filesystem>
<partition_nr config:type="integer">5</partition_nr>
<mount>swap</mount>
</partition>
<partition>
<format config:type="boolean">true</format>
<mount>/space2</mount>
<size>5G</size>
</partition>
<partition>
<format config:type="boolean">true</format>
<mount>/space3</mount>
<size>max</size>
</partition>
</partitions>
<use>6</use>
</drive>
</partitioning>
The last example requires exact knowledge of the existing partition table and the partition numbers of those partitions that should be kept. In some cases, however, such data may not be available, especially in a mixed hardware environment with different hard disk types and configurations. The following scenario is for a system with a non-Linux OS with a designated area for a Linux installation.
In this scenario, shown in figure Figure 4.1, “Keeping partitions”, AutoYaST will not create new partitions. Instead it searches for certain partition types on the system and uses them according to the partitioning plan in the control file. No partition numbers are given in this case, only the mount points and the partition types (additional configuration data can be provided, for example file system options, encryption and file system type).
<partitioning config:type="list">
<drive>
<partitions config:type="list">
<partition>
<create config:type="boolean">false</create>
<format config:type="boolean">true</format>
<mount>/</mount>
<partition_id config:type="integer">131</partition_id>
</partition>
<partition>
<create config:type="boolean">false</create>
<format config:type="boolean">true</format>
<filesystem config:type="symbol">swap</filesystem>
<partition_id config:type="integer">130</partition_id>
<mount>swap</mount>
</partition>
</partitions>
</drive>
</partitioning>
This section will be ignored if you have defined your own
partitioning section too.
This option will allow AutoYaST to use an existing
/etc/fstab and use the partition data from a
previous installation. All partitions are kept and no new partitions
are created. The partitions will be formatted and mounted as specified
in /etc/fstab on a Linux root partition.
Although the default behavior is to format all partitions, it is also
possible to leave some partitions (for
example data partitions) untouched and only mount them. If multiple installations are found on the
system (multiple root partitions with different
fstab files), the installation will abort, unless
the root partition is configured in the control file. The following
example illustrates how this option can be used:
Example: Reading an existing /etc/fstab
<partitioning_advanced>
<fstab>
<!-- Read data from existing fstab. If multiple root partitions are
found, use the one specified below. Otherwise the first root
partition is taken -->
<!-- <root_partition>/dev/sda5</root_partition> -->
<use_existing_fstab config:type="boolean">true</use_existing_fstab>
<!-- all partitions found in fstab will be formatted and mounted
by default unless a partition is listed below with different
settings -->
<partitions config:type="list">
<partition>
<format config:type="boolean">false</format>
<mount>/bootmirror</mount>
</partition>
</partitions>
</fstab>
</partitioning_advanced>
To configure LVM, first create a physical volume using the normal partitioning method described above.
The following example shows how to prepare for LVM in the
partitioning resource. A non-formatted partition is
created on device /dev/sda1 of the type
LVM and with the volume group
system. This partition will use all space available
on the drive.
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<create config:type="boolean">true</create>
<lvm_group>system</lvm_group>
<partition_type>primary</partition_type>
<partition_id config:type="integer">142</partition_id>
<partition_nr config:type="integer">1</partition_nr>
<size>max</size>
</partition>
</partitions>
<use>all</use>
</drive>
</partitioning>
The following example shows how to create the volume group and its logical volumes in a second drive section:
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<lvm_group>system</lvm_group>
<partition_type>primary</partition_type>
<size>max</size>
</partition>
</partitions>
<use>all</use>
</drive>
<drive>
<device>/dev/system</device>
<is_lvm_vg config:type="boolean">true</is_lvm_vg>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<lv_name>user_lv</lv_name>
<mount>/usr</mount>
<size>15G</size>
</partition>
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<lv_name>opt_lv</lv_name>
<mount>/opt</mount>
<size>10G</size>
</partition>
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<lv_name>var_lv</lv_name>
<mount>/var</mount>
<size>1G</size>
</partition>
</partitions>
<pesize>4M</pesize>
<use>all</use>
</drive>
</partitioning>
It is possible to set the size to
max for a logical volume. However, you can only
use max for one logical volume per volume group. You cannot set
two logical volumes in one volume group to max.
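Based on the volume group example above, the last logical volume could be given all remaining space; a sketch:

```xml
<partition>
  <filesystem config:type="symbol">ext4</filesystem>
  <lv_name>var_lv</lv_name>
  <mount>/var</mount>
  <!-- only one logical volume per volume group may use "max" -->
  <size>max</size>
</partition>
```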
Using AutoYaST, you can create and assemble software RAID devices. The supported RAID levels are the following:
RAID 0: This level increases your disk performance. There is no redundancy in this mode. If one of the drives crashes, data recovery will not be possible.
RAID 1: This mode offers the best redundancy. It can be used with two or more disks. An exact copy of all data is maintained on all disks. As long as at least one disk is still working, no data is lost. The partitions used for this type of RAID should have approximately the same size.
RAID 5: This mode combines management of a larger number of disks and still maintains some redundancy. This mode can be used on three disks or more. If one disk fails, all data is still intact. If two disks fail simultaneously, all data is lost.
Multipath: This mode allows access to the same physical device via multiple controllers for redundancy against a fault in a controller card. This mode can be used with at least two devices.
As with LVM, you need to create all RAID partitions first and assign them to the RAID device you want to create afterward. Additionally, you need to specify whether a partition or a device should be part of the RAID or whether it should be a spare device.
The following example shows a simple RAID1 configuration:
<partitioning config:type="list">
<drive>
<device>/dev/sda</device>
<partitions config:type="list">
<partition>
<partition_id config:type="integer">253</partition_id>
<format config:type="boolean">false</format>
<raid_name>/dev/md0</raid_name>
<raid_type>raid</raid_type>
<size>4G</size>
</partition>
<!-- Insert a configuration for the regular partitions located on
/dev/sda here (for example / and swap) -->
</partitions>
<use>all</use>
</drive>
<drive>
<device>/dev/sdb</device>
<partitions config:type="list">
<partition>
<format config:type="boolean">false</format>
<partition_id config:type="integer">253</partition_id>
<raid_name>/dev/md0</raid_name>
<raid_type>raid</raid_type>
<size>4gb</size>
</partition>
</partitions>
<use>all</use>
</drive>
<drive>
<device>/dev/md</device>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">ext4</filesystem>
<format config:type="boolean">true</format>
<mount>/space</mount>
<partition_id config:type="integer">131</partition_id>
<partition_nr config:type="integer">0</partition_nr>
<raid_options>
<chunk_size>4</chunk_size>
<parity_algorithm>left-asymmetric</parity_algorithm>
<raid_type>raid1</raid_type>
</raid_options>
</partition>
</partitions>
<use>all</use>
</drive>
</partitioning>
Keep the following in mind when configuring a RAID:
The device name for a RAID is always /dev/md.
The property partition_nr is used to determine the
MD device number. If partition_nr is equal to 0,
then /dev/md0 is configured.
All RAID-specific options are contained in the
raid_options resource.
The elements listed below must be placed within the following XML structure:
<profile>
<dasd>
<devices config:type="list">
<listentry>
...
</listentry>
</devices>
</dasd>
</profile>
Each disk needs to be configured in a separate <listentry> ... </listentry> section.
|
Attribute |
Values |
Description |
|---|---|---|
|
|
<device>DASD</device> | |
|
|
The device ( <dev_name >/dev/dasda</dev_name> |
Optional but recommended. If left out, AutoYaST tries to guess the device. | ||
|
|
Channel by which the disk is accessed. <channel>0.0.0150</channel> |
Mandatory. | ||
|
|
Enable or disable the use of <diag config:type="boolean">true</diag> |
Optional. |
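Putting the attributes from the table together, a minimal DASD configuration might look like the following sketch (the channel value is taken from the table above; the diag value is illustrative):

```xml
<dasd>
  <devices config:type="list">
    <listentry>
      <!-- channel is mandatory; diag is optional -->
      <channel>0.0.0150</channel>
      <diag config:type="boolean">false</diag>
    </listentry>
  </devices>
</dasd>
```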
The following elements must be placed within the following XML structure:
<profile>
<zfcp>
<devices config:type="list">
<listentry>
...
</listentry>
</devices>
</zfcp>
</profile>
Each disk needs to be configured in a separate
listentry section.
|
Attribute |
Values |
|---|---|
|
|
Channel number <controller_id >0.0.fc00</controller_id> |
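A minimal zfcp configuration might therefore look like the following sketch (the controller_id value is taken from the table above):

```xml
<zfcp>
  <devices config:type="list">
    <listentry>
      <!-- channel number of the zFCP controller -->
      <controller_id>0.0.fc00</controller_id>
    </listentry>
  </devices>
</zfcp>
```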
Using the iscsi-client resource, you can configure
the target machine as an iSCSI client.
<iscsi-client>
<initiatorname>iqn.2013-02.de.suse:01:e229358d2dea</initiatorname>
<targets config:type="list">
<listentry>
<authmethod>None</authmethod>
<portal>192.168.1.1:3260</portal>
<startup>onboot</startup>
<target>iqn.2001-05.com.doe:test</target>
<iface>default</iface>
</listentry>
</targets>
<version>1.0</version>
</iscsi-client>|
Attribute |
Description | |
|---|---|---|
|
initiatorname |
| |
|
version |
Version of the YaST module. Default: 1.0 | |
|
targets |
List of targets. Each entry contains:
|
Using the fcoe_cfg resource, you can configure
Fibre Channel over Ethernet (FCoE).
<fcoe-client>
<fcoe_cfg>
<DEBUG>no</DEBUG>
<USE_SYSLOG>yes</USE_SYSLOG>
</fcoe_cfg>
<interfaces config:type="list">
<listentry>
<dev_name>eth3</dev_name>
<mac_addr>01:000:000:000:42:42</mac_addr>
<device>Gigabit 1313</device>
<vlan_interface>200</vlan_interface>
<fcoe_vlan>eth3.200</fcoe_vlan>
<fcoe_enable>yes</fcoe_enable>
<dcb_required>yes</dcb_required>
<auto_vlan>no</auto_vlan>
<dcb_capable>no</dcb_capable>
<cfg_device>eth3.200</cfg_device>
</listentry>
</interfaces>
<service_start>
<fcoe config:type="boolean">true</fcoe>
<lldpad config:type="boolean">true</lldpad>
</service_start>
</fcoe-client>|
Attribute |
Description |
Values |
|---|---|---|
|
fcoe_cfg |
|
yes/no |
|
interfaces |
List of network cards including the status of VLAN and FCoE configuration. | |
|
service_start |
Enable or disable the start of the services fcoe and lldpad at boot time. Starting the service fcoe means starting the Fibre Channel over Ethernet service daemon fcoemon which controls the FCoE interfaces and establishes a connection with the daemon lldpad. The lldpad service provides the Link Layer Discovery Protocol agent daemon lldpad that informs fcoemon about DCB (Data Center Bridging) features and configuration of the interfaces. |
yes/no |
Language, timezone, and keyboard settings.
<language>
<language>en_GB</language>
<languages>de_DE,en_US</languages>
</language>|
Attribute |
Description |
Values |
|---|---|---|
|
|
Primary language |
A list of available languages can be found under
|
|
|
Secondary languages separated by commas |
A list of available languages can be found under
|
If the configured value for the primary language is unknown, it will be reset
to the default, en_US.
<timezone>
<hwclock>UTC</hwclock>
<timezone>Europe/Berlin</timezone>
</timezone>|
Attribute |
Description |
Values |
|---|---|---|
|
hwclock |
Whether the hardware clock uses local time or UTC |
localtime/UTC |
|
timezone |
Timezone |
A list of available timezones can be found under
|
<keyboard>
<keymap>german</keymap>
</keyboard>|
Attribute |
Description |
Values |
|---|---|---|
|
keymap |
Keyboard layout |
A list of available keymaps can be found in
|
Starting with SLE 15, all products are distributed using one medium.
You need to choose which product to install. To do so explicitly, use
the product option.
<software>
<products config:type="list">
<product>SLED15</product>
</products>
</software>
In special cases, the medium might contain only one product. If so, you do not need to select a product explicitly as described above. AutoYaST will select the only available product automatically.
For backward compatibility with profiles created for pre-SLE 15
products, AutoYaST implements a heuristic that selects products
automatically. This heuristic will be used when the profile does not
contain a product element.
This heuristic is based on the package and pattern selection in the
profile. However, whenever possible avoid using this mechanism and adapt
old profiles to use explicit product selection.
Patterns or packages are configured like this:
<software>
<patterns config:type="list">
<pattern>directory_server</pattern>
</patterns>
<packages config:type="list">
<package>apache</package>
<package>postfix</package>
</packages>
<do_online_update config:type="boolean">true</do_online_update>
</software>
The values are real package or pattern names. If a package name has changed due to an upgrade, you will have to adapt these settings too.
You can use images during installation to speed up the installation.
<!-- note! this is not in the software section! -->
<deploy_image>
<image_installation config:type="boolean">false</image_installation>
</deploy_image>
In addition to the packages available for installation on the DVD-ROMs, you can add external packages including customized kernels. Customized kernel packages must be compatible to the SUSE packages and must install the kernel files to the same locations.
Unlike in earlier versions, you do not need a special resource in the control file to install custom and external packages. Instead, you need to re-create the package database and update it with any new packages or new package versions in the source repository.
A script is provided for this task which will query packages available
in the repository and create the package database. Use the command
/usr/bin/create_package_descr. It can be found in
the inst-source-utils package in the openSUSE Build Service.
When creating the database, all languages will be reset to English.
The unpacked DVD is located in /usr/local/DVDs/LATEST.
tux > cp /tmp/inst-source-utils-2016.7.26-1.2.noarch.rpm /usr/local/DVDs/LATEST/suse/noarch
tux > cd /usr/local/DVDs/LATEST/suse
tux > create_package_descr -d /usr/local/CDs/LATEST/suse
In the above example, the directory
/usr/local/CDs/LATEST/suse contains the
architecture dependent (for example x86_64) and
architecture independent packages (noarch). This
might look different on other architectures.
The advantage of this method is that you can keep an up-to-date repository with fixed and updated packages. Additionally, this method makes the creation of custom CD-ROMs easier.
To add your own module such as the SDK (SUSE Software Development Kit),
add a file add_on_products.xml to the installation source
in the root directory.
The following example shows how the SDK module can be added to the base product
repository. The complete SDK repository will be stored in the directory
/sdk.
Example: add_on_products.xml
This file describes an SDK module included in the base product.
<?xml version="1.0"?>
<add_on_products xmlns="http://www.suse.com/1.0/yast2ns"
xmlns:config="http://www.suse.com/1.0/configns">
<product_items config:type="list">
<product_item>
<name>SUSE Linux Enterprise Software Development Kit</name>
<url>relurl:////sdk?alias=SLE_SDK</url>
<path>/</path>
<!-- Users are asked whether to add such a product -->
<ask_user config:type="boolean">false</ask_user>
<!-- Defines whether the product is pre-selected in case ask_user is used -->
<selected config:type="boolean">true</selected>
</product_item>
</product_items>
</add_on_products>
During a normal installation, the SDK module will now be installed automatically.
This does not happen during an AutoYaST installation; there, an additional entry
is needed in the add-on section of the AutoYaST control file.
Apart from this special case, all other modules, extensions and add-on products can be added from almost any other location during an AutoYaST installation.
<add-on>
<add_on_products config:type="list">
<listentry>
<media_url>cd:///sdk</media_url>
<product>sle-sdk</product>
<alias>SLES SDK</alias>
<product_dir>/</product_dir>
<priority config:type="integer">20</priority>
<ask_on_error config:type="boolean">false</ask_on_error>
<confirm_license config:type="boolean">false</confirm_license>
<name>SUSE Linux Enterprise Software Development Kit</name>
</listentry>
</add_on_products>
</add-on>

| Attribute | Values |
|---|---|
| `media_url` | Product URL. Can have a prefix such as `cd:///` or `relurl://` (as shown in the examples above). |
| `product` | Internal product name if the add-on is a product. |
| `alias` | Repository alias name. Defined by the user. |
| `product_dir` | Additional subpath. Optional. |
| `priority` | Sets the libzypp repository priority. A priority of 1 is the highest. The higher the number, the lower the priority. Default is 99. |
| `ask_on_error` | AutoYaST can ask the user to make add-on products, modules or extensions available instead of reporting a time-out error when no repository can be found at the given location. Set `ask_on_error` to `true` to enable this. |
| `confirm_license` | The user has to confirm the license. Default is `false`. |
| `name` | Repository name. |
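The priority semantics (lower number wins, unset means 99) can be sketched as follows. This is illustrative code, not the libzypp implementation; `pick_repo` is a hypothetical helper:

```python
def pick_repo(repos):
    """Return the repository that would be preferred: lowest priority value wins."""
    # An unset priority defaults to 99, as described in the table above.
    return min(repos, key=lambda r: r.get("priority", 99))

repos = [
    {"alias": "SLE_SDK", "priority": 20},
    {"alias": "base-product"},  # no explicit priority -> treated as 99
]
print(pick_repo(repos)["alias"])  # SLE_SDK
```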
To use unsigned installation sources with AutoYaST, turn off the checks with the following configuration in your AutoYaST control file.
The elements listed below must be placed within the following XML structure:
<general>
<signature-handling>
...
</signature-handling>
</general>
Default values for all options are false. If an
option is set to false and a package or repository
fails the respective test, it is silently ignored and will not be
installed. Note that setting any of these options to
true is a potential security risk. Never do it when
using packages or repositories from third party sources.
| Attribute | Values |
|---|---|
| `accept_unsigned_file` | If set to `true`, unsigned files are accepted: `<accept_unsigned_file config:type="boolean">true</accept_unsigned_file>` |
| `accept_file_without_checksum` | If set to `true`, files without a checksum are accepted: `<accept_file_without_checksum config:type="boolean">true</accept_file_without_checksum>` |
| `accept_verification_failed` | If set to `true`, files are accepted even if their signature verification fails: `<accept_verification_failed config:type="boolean">true</accept_verification_failed>` |
| `accept_unknown_gpg_key` | If set to `true`, unknown GPG keys are accepted: `<accept_unknown_gpg_key config:type="boolean">true</accept_unknown_gpg_key>` |
| `accept_non_trusted_gpg_key` | Set this option to `true` to accept keys that are known but not trusted: `<accept_non_trusted_gpg_key config:type="boolean">true</accept_non_trusted_gpg_key>` |
| `import_gpg_key` | If set to `true`, new GPG keys are imported: `<import_gpg_key config:type="boolean">true</import_gpg_key>` |
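For illustration, a filled-in sketch of the signature-handling section shown above (all values are the defaults; setting any of them to `true` weakens signature checking):

```xml
<general>
  <signature-handling>
    <accept_unsigned_file config:type="boolean">false</accept_unsigned_file>
    <accept_file_without_checksum config:type="boolean">false</accept_file_without_checksum>
    <accept_verification_failed config:type="boolean">false</accept_verification_failed>
    <accept_unknown_gpg_key config:type="boolean">false</accept_unknown_gpg_key>
    <accept_non_trusted_gpg_key config:type="boolean">false</accept_non_trusted_gpg_key>
    <import_gpg_key config:type="boolean">false</import_gpg_key>
  </signature-handling>
</general>
```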
It is possible to configure the signature handling for each add-on
product, module, or extension individually. The following elements must
be placed within the signature-handling section of the
individual add-on product, module, or extension. All settings are
optional. If not configured, the global signature-handling from the
general section is used.
| Attribute | Values |
|---|---|
| `accept_unsigned_file` | If set to `true`, unsigned files are accepted: `<accept_unsigned_file config:type="boolean">true</accept_unsigned_file>` |
| `accept_file_without_checksum` | If set to `true`, files without a checksum are accepted: `<accept_file_without_checksum config:type="boolean">true</accept_file_without_checksum>` |
| `accept_verification_failed` | If set to `true`, files are accepted even if their signature verification fails: `<accept_verification_failed config:type="boolean">true</accept_verification_failed>` |
| `accept_unknown_gpg_key` | Accept all unknown GPG keys with `<accept_unknown_gpg_key><all config:type="boolean">true</all></accept_unknown_gpg_key>`. Otherwise you can define single keys too: `<accept_unknown_gpg_key><all config:type="boolean">false</all><keys config:type="list"><keyid>3B3011B76B9D6523</keyid></keys></accept_unknown_gpg_key>` |
| `accept_non_trusted_gpg_key` | The key is known, but it is not trusted by you. You can trust all keys by adding `<accept_non_trusted_gpg_key><all config:type="boolean">true</all></accept_non_trusted_gpg_key>`, or you can trust specific keys: `<accept_non_trusted_gpg_key><all config:type="boolean">false</all><keys config:type="list"><keyid>3B3011B76B9D6523</keyid></keys></accept_non_trusted_gpg_key>` |
| `import_gpg_key` | Import all new GPG keys with `<import_gpg_key><all config:type="boolean">true</all></import_gpg_key>`. This can be done for specific keys only: `<import_gpg_key><all config:type="boolean">false</all><keys config:type="list"><keyid>3B3011B76B9D6523</keyid></keys></import_gpg_key>` |
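A sketch of how such a per-add-on signature-handling section nests inside an add-on entry; the URL is illustrative, the key ID is the one used in the examples above:

```xml
<add-on>
  <add_on_products config:type="list">
    <listentry>
      <media_url>http://example.com/repos/extra</media_url>
      <product_dir>/</product_dir>
      <signature-handling>
        <accept_unknown_gpg_key>
          <all config:type="boolean">false</all>
          <keys config:type="list">
            <keyid>3B3011B76B9D6523</keyid>
          </keys>
        </accept_unknown_gpg_key>
      </signature-handling>
    </listentry>
  </add_on_products>
</add-on>
```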
Kernel packages are not part of any selection. The required kernel is determined during installation. If the kernel package is added to any selection or to the individual package selection, installation will most likely fail because of conflicts.
To force the installation of a specific kernel, use the
kernel property. The following is an example of
forcing the installation of the default kernel. This kernel will be
installed even if an SMP or other kernel is required.
<software>
<kernel>kernel-default</kernel>
...
</software>
Some packages are selected automatically, either because of a dependency or because they are part of a selection.
Removing such packages might break the system consistency, and it is not
recommended to remove basic packages unless a replacement that
provides the same services is installed. The best example for this case
are mail transfer agent (MTA) packages. By default,
postfix will be selected and installed. To use another MTA like sendmail,
postfix can be removed from the list of selected packages using a list
in the software resource. However, note that sendmail is not shipped
with openSUSE Leap. The following example shows how this can be
done:
<software>
<packages config:type="list">
<package>sendmail</package>
</packages>
<remove-packages config:type="list">
<package>postfix</package>
</remove-packages>
</software>
Note that it is not possible to remove a package that is part of a pattern (see Section 4.8.2, “Package Selection with Patterns and Packages Sections”). When specifying such a package for removal, the installation will fail with the following error message:
The package resolver run failed. Check
your software section in the AutoYaST profile.
By default, all recommended packages/patterns will be installed.
To get a minimal installation that includes required
packages only, you can switch off this behavior with the flag
install_recommended. Note that this flag only affects
a fresh installation and will be ignored during an upgrade.
<software>
<install_recommended config:type="boolean">false</install_recommended>
</software>
Default: if this flag has not been set in the configuration file, all
recommended packages, but no recommended patterns, will be installed.
To install packages after the reboot during stage two, you can
use the post-packages element for that:
<software>
<post-packages config:type="list">
<package>yast2-cim</package>
</post-packages>
</software>
You can also install patterns in stage 2. Use the
post-patterns element for that:
<software>
<post-patterns config:type="list">
<pattern>apparmor</pattern>
</post-patterns>
</software>
You can perform an online update at the end of the installation. Set
the boolean do_online_update to
true. Of course, this only makes sense if you add an
online update repository, for example in the suse-register/customer-center
section or in a post-script. If the online update repository was
already available in stage one via the add-on section, then AutoYaST has
already installed the latest packages available. If a kernel update is
done via online-update, a reboot at the end of stage two is triggered.
<software>
<do_online_update config:type="boolean">true</do_online_update>
</software>
AutoYaST can also be used for a system upgrade. Besides upgrading packages, the following sections are supported as well:
- scripts/pre-scripts: running user scripts very early, before anything else happens.
- add-on: defining additional add-on products.
- language: setting the language.
- timezone: setting the time zone.
- keyboard: setting the keyboard layout.
- software: installing additional software/patterns; removing installed packages.
- suse_register: running the registration process.
To control the upgrade process, the following sections can be defined:
<upgrade>
<stop_on_solver_conflict config:type="boolean">true</stop_on_solver_conflict>
</upgrade>
<backup>
<sysconfig config:type="boolean">true</sysconfig>
<modified config:type="boolean">true</modified>
<remove_old config:type="boolean">true</remove_old>
</backup>

| Element | Description | Comment |
|---|---|---|
| `stop_on_solver_conflict` | Halt installation if there are package dependency issues. | |
| `modified` | Create a backup of modified files. | |
| `sysconfig` | Create a backup of the `/etc/sysconfig` files. | |
| `remove_old` | Remove backups from previous updates. | |
To start the AutoYaST upgrade mode while booting the system, you
need to select the Installation menu item and use the
following boot parameters:
autoupgrade=1 autoyast=http://..
With the services-manager resource you can set the
default systemd target and specify in detail which system services you
want to start or deactivate.
The default-target property specifies the default
systemd target into which the system boots. Valid options are
graphical for a graphical login, or
multi-user for a console login.
The <enable config:type="list"> and <disable config:type="list"> elements let you explicitly enable or disable services.
<services-manager>
<default_target>multi-user</default_target>
<services>
<disable config:type="list">
<service>cups</service>
</disable>
<enable config:type="list">
<service>sshd</service>
</enable>
</services>
</services-manager>
Network configuration is used to connect a single workstation to an Ethernet-based LAN or to configure a dial-up connection. More complex configurations (multiple network cards, routing, etc.) are also possible.
If the following setting is set to true, YaST
will keep network settings created during the installation (via Linuxrc)
and/or merge them with the network settings from the AutoYaST control file (if
defined). AutoYaST settings have a higher priority than already existing
configuration files. YaST will write ifcfg-* files based on the
entries in the control file without removing old ones. If the
dns and routing sections are empty or missing, YaST will keep the already
existing values; otherwise, the settings from the control file will be
applied.
<keep_install_network config:type="boolean">true</keep_install_network>
During the second stage, installation of additional packages will take
place before the network, as described in the profile, is configured.
keep_install_network is set by default to
true to ensure that a network is available
in case it is needed to install those packages. If all packages
are installed during the first stage and the network is not needed early
during the second one, setting keep_install_network
to false will avoid copying the configuration.
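A sketch combining both behaviors: keep the network set up during installation, but override the DNS settings from the profile (the address is illustrative):

```xml
<networking>
  <keep_install_network config:type="boolean">true</keep_install_network>
  <dns>
    <nameservers config:type="list">
      <nameserver>192.168.1.116</nameserver>
    </nameservers>
  </dns>
</networking>
```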
To configure network settings and activate networking automatically, one global resource is used to store the whole network configuration.
<networking>
<dns>
<dhcp_hostname config:type="boolean">true</dhcp_hostname>
<domain>site</domain>
<hostname>linux-bqua</hostname>
<nameservers config:type="list">
<nameserver>192.168.1.116</nameserver>
<nameserver>192.168.1.117</nameserver>
<nameserver>192.168.1.118</nameserver>
</nameservers>
<resolv_conf_policy>auto</resolv_conf_policy>
<searchlist config:type="list">
<search>example.com</search>
<search>example.net</search>
</searchlist>
<write_hostname config:type="boolean">false</write_hostname>
</dns>
<interfaces config:type="list">
<interface>
<bootproto>dhcp</bootproto>
<device>eth0</device>
<startmode>auto</startmode>
</interface>
<interface>
<bootproto>static</bootproto>
<broadcast>127.255.255.255</broadcast>
<device>lo</device>
<firewall>no</firewall>
<ipaddr>127.0.0.1</ipaddr>
<netmask>255.0.0.0</netmask>
<network>127.0.0.0</network>
<prefixlen>8</prefixlen>
<startmode>nfsroot</startmode>
<usercontrol>no</usercontrol>
</interface>
</interfaces>
<ipv6 config:type="boolean">true</ipv6>
<keep_install_network config:type="boolean">false</keep_install_network>
<managed config:type="boolean">false</managed>
<net-udev config:type="list">
<rule>
<name>eth0</name>
<rule>ATTR{address}</rule>
<value>00:30:6E:08:EC:80</value>
</rule>
</net-udev>
<s390-devices config:type="list">
<listentry>
<chanids>0.0.0800 0.0.0801 0.0.0802</chanids>
<type>qeth</type>
</listentry>
</s390-devices>
<routing>
<ipv4_forward config:type="boolean">false</ipv4_forward>
<ipv6_forward config:type="boolean">false</ipv6_forward>
<routes config:type="list">
<route>
<destination>192.168.2.1</destination>
<device>eth0</device>
<extrapara>foo</extrapara>
<gateway>-</gateway>
<netmask>-</netmask>
</route>
<route>
<destination>default</destination>
<device>eth0</device>
<gateway>192.168.1.1</gateway>
<netmask>-</netmask>
</route>
<route>
<destination>default</destination>
<device>lo</device>
<gateway>192.168.5.1</gateway>
<netmask>-</netmask>
</route>
</routes>
</routing>
</networking>
The following example shows the configuration of a network bridge:
<interfaces config:type="list">
<interface>
<device>br0</device>
<bootproto>static</bootproto>
<bridge>yes</bridge>
<bridge_forwarddelay>0</bridge_forwarddelay>
<bridge_ports>eth0 eth1</bridge_ports>
<bridge_stp>off</bridge_stp>
<ipaddr>192.168.122.100</ipaddr>
<netmask>255.255.255.0</netmask>
<network>192.168.122.0</network>
<prefixlen>24</prefixlen>
<startmode>auto</startmode>
</interface>
<interface>
<device>eth0</device>
<bootproto>none</bootproto>
<startmode>hotplug</startmode>
</interface>
<interface>
<device>eth1</device>
<bootproto>none</bootproto>
<startmode>hotplug</startmode>
</interface>
</interfaces>
Using IPv6 addresses in AutoYaST is fully supported. To disable IPv6 support, set <ipv6 config:type="boolean">false</ipv6>.
The following elements must be between the <net-udev>...</net-udev> tags.
| Element | Description | Comment |
|---|---|---|
| `name` | Network interface name, for example `eth0`. | required |
| `rule` | Matching rule, for example `ATTR{address}` (as in the example above). | required |
| `value` | Value for the matching rule, for example `00:30:6E:08:EC:80`. | required |
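On the installed system, a net-udev entry like the one in the example above typically ends up as a persistent udev network rule. A rough sketch of the resulting rule follows; the file name and exact syntax vary by release, so treat this as illustrative only:

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch)
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:30:6e:08:ec:80", NAME="eth0"
```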
The following elements must be between the <s390-devices>...</s390-devices> tags.
| Element | Description | Comment |
|---|---|---|
| `type` | `qeth`, `ctc` or `iucv`. | |
| `chanids` | Channel IDs, separated by spaces: `<chanids>0.0.0700 0.0.0701 0.0.0702</chanids>` | |
| `layer2` | `<layer2 config:type="boolean">true</layer2>` | boolean; default: `false` |
| `portname` | QETH port name (deprecated since openSUSE 42.2). | |
| `protocol` | CTC / LCS protocol, a small number (as a string): `<protocol>1</protocol>` | optional |
| `router` | IUCV router/user. | |
Configure your Internet proxy (caching) settings.
Configure proxies for HTTP, HTTPS, and FTP with
http_proxy, https_proxy
and ftp_proxy, respectively. Addresses or names that
should be directly accessible need to be specified with
no_proxy (space-separated values). If you are using
a proxy server with authorization, fill in
proxy_user and proxy_password:
<proxy>
<enabled config:type="boolean">true</enabled>
<ftp_proxy>http://192.168.1.240:3128</ftp_proxy>
<http_proxy>http://192.168.1.240:3128</http_proxy>
<no_proxy>www.example.com .example.org localhost</no_proxy>
<proxy_password>testpw</proxy_password>
<proxy_user>testuser</proxy_user>
</proxy>
Using the nis resource, you can configure the target
machine as a NIS client. The following example shows a detailed
configuration using multiple domains.
<nis>
<nis_broadcast config:type="boolean">true</nis_broadcast>
<nis_broken_server config:type="boolean">true</nis_broken_server>
<nis_by_dhcp config:type="boolean">false</nis_by_dhcp>
<nis_domain>test.com</nis_domain>
<nis_local_only config:type="boolean">true</nis_local_only>
<nis_options></nis_options>
<nis_other_domains config:type="list">
<nis_other_domain>
<nis_broadcast config:type="boolean">false</nis_broadcast>
<nis_domain>domain.com</nis_domain>
<nis_servers config:type="list">
<nis_server>10.10.0.1</nis_server>
</nis_servers>
</nis_other_domain>
</nis_other_domains>
<nis_servers config:type="list">
<nis_server>192.168.1.1</nis_server>
</nis_servers>
<start_autofs config:type="boolean">true</start_autofs>
<start_nis config:type="boolean">true</start_nis>
</nis>
You can configure the target machine as a NIS server: a NIS master server, a NIS slave server, or a combination of both.
<nis_server>
<domain>mydomain.de</domain>
<maps_to_serve config:type="list">
<nis_map>auto.master</nis_map>
<nis_map>ethers</nis_map>
</maps_to_serve>
<merge_passwd config:type="boolean">false</merge_passwd>
<mingid config:type="integer">0</mingid>
<minuid config:type="integer">0</minuid>
<nopush config:type="boolean">false</nopush>
<pwd_chfn config:type="boolean">false</pwd_chfn>
<pwd_chsh config:type="boolean">false</pwd_chsh>
<pwd_srcdir>/etc</pwd_srcdir>
<securenets config:type="list">
<securenet>
<netmask>255.0.0.0</netmask>
<network>127.0.0.0</network>
</securenet>
</securenets>
<server_type>master</server_type>
<slaves config:type="list"/>
<start_ypbind config:type="boolean">false</start_ypbind>
<start_yppasswdd config:type="boolean">false</start_yppasswdd>
<start_ypxfrd config:type="boolean">false</start_ypxfrd>
</nis_server>

| Attribute | Values | Description |
|---|---|---|
| `domain` | | NIS domain name. |
| `maps_to_serve` | `auto.master`, `ethers`, `group`, `hosts`, `netgrp`, `networks`, `passwd`, `protocols`, `rpc`, `services`, `shadow` | List of maps which are available for the server. |
| `merge_passwd` | `true`/`false` | Select if your passwd file should be merged with the shadow file (only possible if the shadow file exists). |
| `mingid` | | Minimum GID to include in the user maps. |
| `minuid` | | Minimum UID to include in the user maps. |
| `nopush` | `true`/`false` | Do not push the changes to slave servers (useful if there are none). |
| `pwd_chfn` | `true`/`false` | `YPPWD_CHFN`: allow changing the full name. |
| `pwd_chsh` | `true`/`false` | `YPPWD_CHSH`: allow changing the login shell. |
| `pwd_srcdir` | Default: `/etc` | `YPPWD_SRCDIR`: source directory for passwd data. |
| `securenets` | | List of hosts allowed to query the NIS server. A host address is allowed if network is equal to the bitwise AND of the host's address and the netmask. The entry with netmask 255.0.0.0 and network 127.0.0.0 must exist to allow connections from the local host. Entering netmask 0.0.0.0 and network 0.0.0.0 gives access to all hosts. |
| `server_type` | `master`, `slave`, `none` | Select whether to configure the NIS server as a master or a slave, or not to configure a NIS server. |
| `slaves` | | List of host names to configure as NIS server slaves. |
| `start_ypbind` | `true`/`false` | This host is also a NIS client (only when the client is configured locally). |
| `start_yppasswdd` | `true`/`false` | Also start the password daemon. |
| `start_ypxfrd` | `true`/`false` | Also start the map transfer daemon. Fast map distribution; it will speed up the transfer of maps to the slaves. |
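The securenets matching rule described above (a host is allowed if network equals the bitwise AND of the host's address and the netmask) can be sketched as follows; `securenet_allows` is a hypothetical helper for illustration:

```python
import ipaddress

def securenet_allows(host: str, network: str, netmask: str) -> bool:
    # A host is allowed if its address ANDed with the netmask equals the network.
    h = int(ipaddress.IPv4Address(host))
    n = int(ipaddress.IPv4Address(network))
    m = int(ipaddress.IPv4Address(netmask))
    return (h & m) == n

print(securenet_allows("127.0.0.1", "127.0.0.0", "255.0.0.0"))  # True
print(securenet_allows("10.0.0.5", "127.0.0.0", "255.0.0.0"))   # False
print(securenet_allows("10.0.0.5", "0.0.0.0", "0.0.0.0"))       # True
```

The last call illustrates why netmask 0.0.0.0 with network 0.0.0.0 gives access to all hosts: ANDing any address with 0.0.0.0 yields 0.0.0.0.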
Using the auth-server resource, you can configure the
target machine as an LDAP server. The following example shows a detailed
configuration.
<auth-server>
<daemon>
<listeners config:type="list">
<listentry>ldap</listentry>
<listentry>ldapi</listentry>
</listeners>
<serviceEnabled>1</serviceEnabled>
<slp/>
</daemon>
<databases config:type="list">
<listentry>
<access config:type="list">
<listentry>
<access config:type="list">
<listentry>
<control/>
<level>write</level>
<type>self</type>
<value/>
</listentry>
<listentry>
<control/>
<level>auth</level>
<type>*</type>
<value/>
</listentry>
</access>
<target>
<attrs>userPassword</attrs>
</target>
</listentry>
<listentry>
<access config:type="list">
<listentry>
<control/>
<level>write</level>
<type>self</type>
<value/>
</listentry>
<listentry>
<control/>
<level>read</level>
<type>*</type>
<value/>
</listentry>
</access>
<target>
<attrs>shadowLastChange</attrs>
</target>
</listentry>
<listentry>
<access config:type="list">
<listentry>
<control/>
<level>read</level>
<type>self</type>
<value/>
</listentry>
<listentry>
<control/>
<level>none</level>
<type>*</type>
<value/>
</listentry>
</access>
<target>
<attrs>userPKCS12</attrs>
</target>
</listentry>
<listentry>
<access config:type="list">
<listentry>
<control/>
<level>read</level>
<type>*</type>
<value/>
</listentry>
</access>
<target/>
</listentry>
</access>
<checkpoint config:type="list">
<listentry>1024</listentry>
<listentry>5</listentry>
</checkpoint>
<directory>/var/lib/ldap</directory>
<entrycache>10000</entrycache>
<idlcache>30000</idlcache>
<indexes>
<cn>
<eq>1</eq>
<sub>1</sub>
</cn>
<displayName>
<eq>1</eq>
<sub>1</sub>
</displayName>
<gidNumber>
<eq>1</eq>
</gidNumber>
<givenName>
<eq>1</eq>
<sub>1</sub>
</givenName>
<mail>
<eq>1</eq>
</mail>
<member>
<eq>1</eq>
</member>
<memberUid>
<eq>1</eq>
</memberUid>
<objectclass>
<eq>1</eq>
</objectclass>
<sn>
<eq>1</eq>
<sub>1</sub>
</sn>
<uid>
<eq>1</eq>
<sub>1</sub>
</uid>
<uidNumber>
<eq>1</eq>
</uidNumber>
</indexes>
<rootdn>cn=Administrator,DC=corp,DC=Fabrikam,DC=COM,CN=Karen Berge</rootdn>
<rootpw>{SSHA}LCdgE3gNejqBogGI3ac1Xf4DOIVMSk9ZQg==</rootpw>
<suffix>DC=corp,DC=Fabrikam,DC=COM,CN=Karen Berge</suffix>
<type>hdb</type>
</listentry>
</databases>
<globals>
<allow config:type="list"/>
<disallow config:type="list"/>
<loglevel config:type="list">
<listentry>none</listentry>
</loglevel>
<tlsconfig>
<caCertDir/>
<caCertFile/>
<certFile/>
<certKeyFile/>
<crlCheck>0</crlCheck>
<crlFile/>
<verifyClient>0</verifyClient>
</tlsconfig>
</globals>
<schema config:type="list">
<listentry>
<definition>dn: cn=schema,cn=config
objectClass: olcSchemaConfig
......
.....
....
...
..
.
</definition>
<name>schema</name>
</listentry>
<listentry>
<includeldif>/etc/openldap/schema/core.ldif</includeldif>
</listentry>
<listentry>
<includeldif>/etc/openldap/schema/cosine.ldif</includeldif>
</listentry>
<listentry>
<includeldif>/etc/openldap/schema/inetorgperson.ldif</includeldif>
</listentry>
<listentry>
<includeschema>/etc/openldap/schema/rfc2307bis.schema</includeschema>
</listentry>
<listentry>
<includeschema>/etc/openldap/schema/yast.schema</includeschema>
</listentry>
</schema>
</auth-server>
Using the samba-client resource, you can configure a
membership of a workgroup, NT domain, or Active Directory domain.
<samba-client>
<disable_dhcp_hostname config:type="boolean">true</disable_dhcp_hostname>
<global>
<security>domain</security>
<usershare_allow_guests>No</usershare_allow_guests>
<usershare_max_shares>100</usershare_max_shares>
<workgroup>WORKGROUP</workgroup>
</global>
<winbind config:type="boolean">false</winbind>
</samba-client>

| Attribute | Values | Description |
|---|---|---|
| `disable_dhcp_hostname` | `true`/`false` | Do not allow DHCP to change the host name. |
| `security` | `ADS`/`domain` | Kind of authentication regime (domain technology or Active Directory server (ADS)). |
| `usershare_allow_guests` | `No`/`Yes` | Sharing guest access is allowed. |
| `usershare_max_shares` | `0` means that user shares are not enabled. | Maximum number of user shares. |
| `workgroup` | | Workgroup or domain name. |
| `winbind` | `true`/`false` | Using winbind. |
Configuration of a simple Samba server.
<samba-server>
<accounts config:type="list"/>
<backend/>
<config config:type="list">
<listentry>
<name>global</name>
<parameters>
<security>domain</security>
<usershare_allow_guests>No</usershare_allow_guests>
<usershare_max_shares>100</usershare_max_shares>
<workgroup>WORKGROUP</workgroup>
</parameters>
</listentry>
</config>
<service>Disabled</service>
<trustdom/>
<version>2.11</version>
</samba-server>

| Attribute | Values | Description |
|---|---|---|
| `accounts` | | List of Samba accounts. |
| `backend` | | List of available back-ends. |
| `config` | | Setting additional user-defined parameters. The example shows parameters in the `global` section. |
| `service` | `Enabled`/`Disabled` | Samba service starts during boot. |
| `trustdom` | | Trusted domains. A map of two maps. |
| `version` | Default: `2.11` | Samba version. |
The following is a simple example for an LDAP user authentication. NSS
and PAM will automatically be configured accordingly. Required data are
the name of the search base (base DN, for example,
dc=mydomain,dc=com) and the IP address of the LDAP
server.
<auth-client>
<sssd>yes</sssd>
<nssldap>no</nssldap>
<sssd_conf>
<sssd>
<config_file_version>2</config_file_version>
<services>nss, pam, sudo</services>
<domains>EXAMPLE</domains>
</sssd>
<auth_domains>
<domain>
<domain_name>EXAMPLE</domain_name>
<id_provider>ldap</id_provider>
<sudo_provider>ldap</sudo_provider>
<ldap_uri>ldap://example.com</ldap_uri>
<ldap_sudo_search_base>ou=sudoers,dc=example,dc=com</ldap_sudo_search_base>
</domain>
</auth_domains>
</sssd_conf>
</auth-client>
To use LDAP with native SSL (rather than TLS), add the
ldaps resource:
<auth-client>
<sssd_conf>
<auth_domains>
<domain>
<ldaps config:type="boolean">true</ldaps>
</domain>
</auth_domains>
</sssd_conf>
</auth-client>
Configuring a system as an NFS client or an NFS server can be done using the configuration system. The following examples show how both NFS client and server can be configured.
From openSUSE Leap 42.3 on, the structure of the NFS client configuration has
changed. Some global configuration options were introduced:
enable_nfs4 to switch NFS4 support on or off, and
idmapd_domain to define the domain name for rpc.idmapd
(this only makes sense when NFS4 is enabled). Attention: the old
structure is not compatible with the new one, and control files with
an NFS section created on older releases will not work with newer
products.
<nfs>
<enable_nfs4 config:type="boolean">true</enable_nfs4>
<idmapd_domain>suse.cz</idmapd_domain>
<nfs_entries config:type="list">
<nfs_entry>
<mount_point>/home</mount_point>
<nfs_options>sec=krb5i,intr,rw</nfs_options>
<server_path>saurus.suse.cz:/home</server_path>
<vfstype>nfs4</vfstype>
</nfs_entry>
<nfs_entry>
<mount_point>/work</mount_point>
<nfs_options>defaults</nfs_options>
<server_path>bivoj.suse.cz:/work</server_path>
<vfstype>nfs</vfstype>
</nfs_entry>
<nfs_entry>
<mount_point>/mnt</mount_point>
<nfs_options>defaults</nfs_options>
<server_path>fallback.suse.cz:/srv/dist</server_path>
<vfstype>nfs</vfstype>
</nfs_entry>
</nfs_entries>
</nfs>
<nfs_server>
<nfs_exports config:type="list">
<nfs_export>
<allowed config:type="list">
<allowed_clients>*(ro,root_squash,sync)</allowed_clients>
</allowed>
<mountpoint>/home</mountpoint>
</nfs_export>
<nfs_export>
<allowed config:type="list">
<allowed_clients>*(ro,root_squash,sync)</allowed_clients>
</allowed>
<mountpoint>/work</mountpoint>
</nfs_export>
</nfs_exports>
<start_nfsserver config:type="boolean">true</start_nfsserver>
</nfs_server>
Since openSUSE Leap 15, the NTP client profile has a new format and is not compatible with previous profiles. You need to update your NTP client profile used in prior openSUSE Leap versions to be compatible with version 15 and newer.
Following is an example of the NTP client configuration:
<ntp-client>
<ntp_policy>auto</ntp_policy>
<ntp_servers config:type="list">
<ntp_server>
<address>cz.pool.ntp.org</address>
<iburst config:type="boolean">false</iburst>
<offline config:type="boolean">true</offline>
</ntp_server>
</ntp_servers>
<ntp_sync>15</ntp_sync>
</ntp-client>
The address element holds the URL of the time server or pool of time servers.
This module lets you create a detailed mail configuration for the client. The module contains various options. We recommend using it at least for the initial configuration.
<mail>
<aliases config:type="list">
<alias>
<alias>root</alias>
<comment></comment>
<destinations>foo</destinations>
</alias>
<alias>
<alias>test</alias>
<comment></comment>
<destinations>foo</destinations>
</alias>
</aliases>
<connection_type config:type="symbol">permanent</connection_type>
<fetchmail config:type="list">
<fetchmail_entry>
<local_user>foo</local_user>
<password>bar</password>
<protocol>POP3</protocol>
<remote_user>foo</remote_user>
<server>pop.foo.com</server>
</fetchmail_entry>
<fetchmail_entry>
<local_user>test</local_user>
<password>bar</password>
<protocol>IMAP</protocol>
<remote_user>test</remote_user>
<server>blah.com</server>
</fetchmail_entry>
</fetchmail>
<from_header>test.com</from_header>
<listen_remote config:type="boolean">true</listen_remote>
<local_domains config:type="list">
<domains>test1.com</domains>
</local_domains>
<masquerade_other_domains config:type="list">
<domain>blah.com</domain>
</masquerade_other_domains>
<masquerade_users config:type="list">
<masquerade_user>
<address>joe@test.com</address>
<comment></comment>
<user>joeuser</user>
</masquerade_user>
<masquerade_user>
<address>bar@test.com</address>
<comment></comment>
<user>foo</user>
</masquerade_user>
</masquerade_users>
<mta config:type="symbol">postfix</mta>
<outgoing_mail_server>test.com</outgoing_mail_server>
<postfix_mda config:type="symbol">local</postfix_mda>
<smtp_auth config:type="list">
<listentry>
<password>bar</password>
<server>test.com</server>
<user>foo</user>
</listentry>
</smtp_auth>
<use_amavis config:type="boolean">true</use_amavis>
<virtual_users config:type="list">
<virtual_user>
<alias>test.com</alias>
<comment></comment>
<destinations>foo.com</destinations>
</virtual_user>
<virtual_user>
<alias>geek.com</alias>
<comment></comment>
<destinations>bar.com</destinations>
</virtual_user>
</virtual_users>
</mail>
This section is used for the configuration of an Apache HTTP server.
For less experienced users, we suggest configuring the Apache
server with the HTTP server YaST module first. After that,
call the AutoYaST configuration module,
select the HTTP server YaST module and clone the
Apache settings. These settings can be exported via the
File menu.
<http-server>
<Listen config:type="list">
<listentry>
<ADDRESS/>
<PORT>80</PORT>
</listentry>
</Listen>
<hosts config:type="list">
<hosts_entry>
<KEY>main</KEY>
<VALUE config:type="list">
<listentry>
<KEY>DocumentRoot</KEY>
<OVERHEAD>
#
# Global configuration that will be applicable for all
# virtual hosts, unless deleted here or overridden elsewhere.
#
</OVERHEAD>
<VALUE>"/srv/www/htdocs"</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<OVERHEAD>
#
# Configure the DocumentRoot
#
</OVERHEAD>
<SECTIONNAME>Directory</SECTIONNAME>
<SECTIONPARAM>"/srv/www/htdocs"</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Options</KEY>
<OVERHEAD>
# Possible values for the Options directive are "None", "All",
# or any combination of:
# Indexes Includes FollowSymLinks SymLinksifOwnerMatch
# ExecCGI MultiViews
#
# Note that "MultiViews" must be named *explicitly*
# --- "Options All"
# does not give it to you.
#
# The Options directive is both complicated and important.
# Please see
# http://httpd.apache.org/docs/2.4/mod/core.html#options
# for more information.
</OVERHEAD>
<VALUE>None</VALUE>
</listentry>
<listentry>
<KEY>AllowOverride</KEY>
<OVERHEAD>
# AllowOverride controls what directives may be placed in
# .htaccess files. It can be "All", "None", or any combination
# of the keywords:
# Options FileInfo AuthConfig Limit
</OVERHEAD>
<VALUE>None</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<OVERHEAD>
# Controls who can get stuff from this server.
</OVERHEAD>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>!mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Require</KEY>
<VALUE>all granted</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Order</KEY>
<VALUE>allow,deny</VALUE>
</listentry>
<listentry>
<KEY>Allow</KEY>
<VALUE>from all</VALUE>
</listentry>
</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>Alias</KEY>
<OVERHEAD>
# Aliases: aliases can be added as needed (with no limit).
# The format is Alias fakename realname
#
# Note that if you include a trailing / on fakename then the
# server will require it to be present in the URL. So "/icons"
# is not aliased in this example, only "/icons/". If the fakename
# is slash-terminated, then the realname must also be slash
# terminated, and if the fakename omits the trailing slash, the
# realname must also omit it.
# We include the /icons/ alias for FancyIndexed directory listings.
# If you do not use FancyIndexing, you may comment this out.
#
</OVERHEAD>
<VALUE>/icons/ "/usr/share/apache2/icons/"</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<OVERHEAD>
</OVERHEAD>
<SECTIONNAME>Directory</SECTIONNAME>
<SECTIONPARAM>"/usr/share/apache2/icons"</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Options</KEY>
<VALUE>Indexes MultiViews</VALUE>
</listentry>
<listentry>
<KEY>AllowOverride</KEY>
<VALUE>None</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>!mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Require</KEY>
<VALUE>all granted</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Order</KEY>
<VALUE>allow,deny</VALUE>
</listentry>
<listentry>
<KEY>Allow</KEY>
<VALUE>from all</VALUE>
</listentry>
</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>ScriptAlias</KEY>
<OVERHEAD>
# ScriptAlias: This controls which directories contain server
# scripts. ScriptAliases are essentially the same as Aliases,
# except that documents in the realname directory are treated
# as applications and run by the server when requested rather
# than as documents sent to the client.
# The same rules about trailing "/" apply to ScriptAlias
# directives as to Alias.
#
</OVERHEAD>
<VALUE>/cgi-bin/ "/srv/www/cgi-bin/"</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<OVERHEAD>
# "/srv/www/cgi-bin" should be changed to wherever your
# ScriptAliased CGI directory exists, if you have that configured.
#
</OVERHEAD>
<SECTIONNAME>Directory</SECTIONNAME>
<SECTIONPARAM>"/srv/www/cgi-bin"</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>AllowOverride</KEY>
<VALUE>None</VALUE>
</listentry>
<listentry>
<KEY>Options</KEY>
<VALUE>+ExecCGI -Includes</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>!mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Require</KEY>
<VALUE>all granted</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>mod_access_compat.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>Order</KEY>
<VALUE>allow,deny</VALUE>
</listentry>
<listentry>
<KEY>Allow</KEY>
<VALUE>from all</VALUE>
</listentry>
</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<OVERHEAD>
# UserDir: The name of the directory that is appended onto a
# user's home directory if a ~user request is received.
# To disable it, simply remove userdir from the list of modules
# in APACHE_MODULES in /etc/sysconfig/apache2.
#
</OVERHEAD>
<SECTIONNAME>IfModule</SECTIONNAME>
<SECTIONPARAM>mod_userdir.c</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>UserDir</KEY>
<OVERHEAD>
# Note that the name of the user directory ("public_html")
# cannot simply be changed here, since it is a compile time
# setting. The apache package would have to be rebuilt.
# You could work around by deleting /usr/sbin/suexec, but
# then all scripts from the directories would be executed
# with the UID of the webserver.
</OVERHEAD>
<VALUE>public_html</VALUE>
</listentry>
<listentry>
<KEY>Include</KEY>
<OVERHEAD>
# The actual configuration of the directory is in
# /etc/apache2/mod_userdir.conf.
</OVERHEAD>
<VALUE>/etc/apache2/mod_userdir.conf</VALUE>
</listentry>
</VALUE>
</listentry>
<listentry>
<KEY>IncludeOptional</KEY>
<OVERHEAD>
# Include all *.conf files from /etc/apache2/conf.d/.
#
# This is mostly meant as a place for other RPM packages to drop
# in their configuration snippet.
#
#
# You can comment this out here if you want those bits included
# only in a certain virtual host, but not globally.
</OVERHEAD>
<VALUE>/etc/apache2/conf.d/*.conf</VALUE>
</listentry>
<listentry>
<KEY>IncludeOptional</KEY>
<OVERHEAD>
# The manual... if it is installed ('?' means it will not complain)
</OVERHEAD>
<VALUE>/etc/apache2/conf.d/apache2-manual?conf</VALUE>
</listentry>
<listentry>
<KEY>ServerName</KEY>
<VALUE>linux-wtyj</VALUE>
</listentry>
<listentry>
<KEY>ServerAdmin</KEY>
<OVERHEAD>
</OVERHEAD>
<VALUE>root@linux-wtyj</VALUE>
</listentry>
<listentry>
<KEY>NameVirtualHost</KEY>
<VALUE>192.168.43.2</VALUE>
</listentry>
</VALUE>
</hosts_entry>
<hosts_entry>
<KEY>192.168.43.2/secondserver.suse.de</KEY>
<VALUE config:type="list">
<listentry>
<KEY>DocumentRoot</KEY>
<VALUE>/srv/www/htdocs</VALUE>
</listentry>
<listentry>
<KEY>ServerName</KEY>
<VALUE>secondserver.suse.de</VALUE>
</listentry>
<listentry>
<KEY>ServerAdmin</KEY>
<VALUE>second_server@suse.de</VALUE>
</listentry>
<listentry>
<KEY>_SECTION</KEY>
<SECTIONNAME>Directory</SECTIONNAME>
<SECTIONPARAM>/srv/www/htdocs</SECTIONPARAM>
<VALUE config:type="list">
<listentry>
<KEY>AllowOverride</KEY>
<VALUE>None</VALUE>
</listentry>
<listentry>
<KEY>Require</KEY>
<VALUE>all granted</VALUE>
</listentry>
</VALUE>
</listentry>
</VALUE>
</hosts_entry>
</hosts>
<modules config:type="list">
<module_entry>
<change>enable</change>
<name>socache_shmcb</name>
<userdefined config:type="boolean">true</userdefined>
</module_entry>
<module_entry>
<change>enable</change>
<name>reqtimeout</name>
<userdefined config:type="boolean">true</userdefined>
</module_entry>
<module_entry>
<change>enable</change>
<name>authn_core</name>
<userdefined config:type="boolean">true</userdefined>
</module_entry>
<module_entry>
<change>enable</change>
<name>authz_core</name>
<userdefined config:type="boolean">true</userdefined>
</module_entry>
</modules>
<service config:type="boolean">true</service>
<version>2.9</version>
</http-server>|
List Name |
List Elements |
Description |
|---|---|---|
|
Listen |
List of listen entries (port and address). | |
|
PORT |
port address | |
|
ADDRESS |
Network address. All addresses will be taken if this entry is empty. | |
|
hosts |
List of Hosts configuration | |
|
KEY |
Host name. | |
|
VALUE |
List of different values describing the host. | |
|
modules |
Module list. Only user-defined modules need to be described. | |
|
name |
Module name | |
|
userdefined |
For historical reasons, it is always set to true. | |
|
change |
For historical reasons, it is always set to enable. |
|
Element |
Description |
Comment |
|---|---|---|
|
version |
Version of used Apache server |
Only for information. Default 2.9 |
|
service |
Enable Apache service |
Optional. Default: false |
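To illustrate how the `_SECTION` entries in the profile map onto the rendered Apache configuration, the Directory block for the icons alias from the example above corresponds roughly to the following httpd.conf fragment (a sketch, not the exact output YaST generates):

```apache
<Directory "/usr/share/apache2/icons">
    Options Indexes MultiViews
    AllowOverride None
    <IfModule !mod_access_compat.c>
        Require all granted
    </IfModule>
    <IfModule mod_access_compat.c>
        Order allow,deny
        Allow from all
    </IfModule>
</Directory>
```

Each `_SECTION` listentry becomes one section directive, with its `SECTIONPARAM` as the argument and the nested listentries as the directives inside it.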
To run an Apache server correctly, make sure the firewall is configured appropriately.
Squid is a caching and forwarding Web proxy.
<squid>
<acls config:type="list">
<listentry>
<name>QUERY</name>
<options config:type="list">
<option>cgi-bin \?</option>
</options>
<type>urlpath_regex</type>
</listentry>
<listentry>
<name>apache</name>
<options config:type="list">
<option>Server</option>
<option>^Apache</option>
</options>
<type>rep_header</type>
</listentry>
<listentry>
<name>all</name>
<options config:type="list">
<option>0.0.0.0/0.0.0.0</option>
</options>
<type>src</type>
</listentry>
<listentry>
<name>manager</name>
<options config:type="list">
<option>cache_object</option>
</options>
<type>proto</type>
</listentry>
<listentry>
<name>localhost</name>
<options config:type="list">
<option>127.0.0.1/255.255.255.255</option>
</options>
<type>src</type>
</listentry>
<listentry>
<name>to_localhost</name>
<options config:type="list">
<option>127.0.0.0/8</option>
</options>
<type>dst</type>
</listentry>
<listentry>
<name>SSL_ports</name>
<options config:type="list">
<option>443</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>80</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>21</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>443</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>70</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>210</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>1025-65535</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>280</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>488</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>591</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>Safe_ports</name>
<options config:type="list">
<option>777</option>
</options>
<type>port</type>
</listentry>
<listentry>
<name>CONNECT</name>
<options config:type="list">
<option>CONNECT</option>
</options>
<type>method</type>
</listentry>
</acls>
<http_accesses config:type="list">
<listentry>
<acl config:type="list">
<listentry>manager</listentry>
<listentry>localhost</listentry>
</acl>
<allow config:type="boolean">true</allow>
</listentry>
<listentry>
<acl config:type="list">
<listentry>manager</listentry>
</acl>
<allow config:type="boolean">false</allow>
</listentry>
<listentry>
<acl config:type="list">
<listentry>!Safe_ports</listentry>
</acl>
<allow config:type="boolean">false</allow>
</listentry>
<listentry>
<acl config:type="list">
<listentry>CONNECT</listentry>
<listentry>!SSL_ports</listentry>
</acl>
<allow config:type="boolean">false</allow>
</listentry>
<listentry>
<acl config:type="list">
<listentry>localhost</listentry>
</acl>
<allow config:type="boolean">true</allow>
</listentry>
<listentry>
<acl config:type="list">
<listentry>all</listentry>
</acl>
<allow config:type="boolean">false</allow>
</listentry>
</http_accesses>
<http_ports config:type="list">
<listentry>
<host/>
<port>3128</port>
<transparent config:type="boolean">false</transparent>
</listentry>
</http_ports>
<refresh_patterns config:type="list">
<listentry>
<case_sensitive config:type="boolean">true</case_sensitive>
<max>10080</max>
<min>1440</min>
<percent>20</percent>
<regexp>^ftp:</regexp>
</listentry>
<listentry>
<case_sensitive config:type="boolean">true</case_sensitive>
<max>1440</max>
<min>1440</min>
<percent>0</percent>
<regexp>^gopher:</regexp>
</listentry>
<listentry>
<case_sensitive config:type="boolean">true</case_sensitive>
<max>4320</max>
<min>0</min>
<percent>20</percent>
<regexp>.</regexp>
</listentry>
</refresh_patterns>
<service_enabled_on_startup config:type="boolean">true</service_enabled_on_startup>
<settings>
<access_log config:type="list">
<listentry>/var/log/squid/access.log</listentry>
</access_log>
<cache_dir config:type="list">
<listentry>ufs</listentry>
<listentry>/var/cache/squid</listentry>
<listentry>100</listentry>
<listentry>16</listentry>
<listentry>256</listentry>
</cache_dir>
<cache_log config:type="list">
<listentry>/var/log/squid/cache.log</listentry>
</cache_log>
<cache_mem config:type="list">
<listentry>8</listentry>
<listentry>MB</listentry>
</cache_mem>
<cache_mgr config:type="list">
<listentry>webmaster</listentry>
</cache_mgr>
<cache_replacement_policy config:type="list">
<listentry>lru</listentry>
</cache_replacement_policy>
<cache_store_log config:type="list">
<listentry>/var/log/squid/store.log</listentry>
</cache_store_log>
<cache_swap_high config:type="list">
<listentry>95</listentry>
</cache_swap_high>
<cache_swap_low config:type="list">
<listentry>90</listentry>
</cache_swap_low>
<client_lifetime config:type="list">
<listentry>1</listentry>
<listentry>days</listentry>
</client_lifetime>
<connect_timeout config:type="list">
<listentry>2</listentry>
<listentry>minutes</listentry>
</connect_timeout>
<emulate_httpd_log config:type="list">
<listentry>off</listentry>
</emulate_httpd_log>
<error_directory config:type="list">
<listentry/>
</error_directory>
<ftp_passive config:type="list">
<listentry>on</listentry>
</ftp_passive>
<maximum_object_size config:type="list">
<listentry>4096</listentry>
<listentry>KB</listentry>
</maximum_object_size>
<memory_replacement_policy config:type="list">
<listentry>lru</listentry>
</memory_replacement_policy>
<minimum_object_size config:type="list">
<listentry>0</listentry>
<listentry>KB</listentry>
</minimum_object_size>
</settings>
</squid>|
Attribute |
Values |
Description |
|---|---|---|
|
|
List of access control list (ACL) entries. |
Each list entry contains the name, type, and additional options. Use the YaST Squid configuration module to get an overview of possible entries. |
|
|
In the Access Control table, access can be denied or allowed to ACL Groups. |
If there are more ACL Groups in one definition, access will be allowed or denied to members who belong to all ACL Groups at the same time. The Access Control table is checked in the order listed here. The first matching entry is used. |
|
|
Define all ports where Squid will listen for clients' HTTP requests. |
|
|
|
Refresh patterns define how Squid treats the objects in the cache. |
The refresh patterns are checked in the order listed here. The first matching entry is used. |
|
|
|
Map of all available general parameters with default values. |
Use the YaST Squid configuration module to get an overview about possible entries. |
|
|
Start the Squid service when booting. |
Value: true/false |
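As a sketch of the mapping, selected ACL and access entries from the profile above correspond roughly to the following squid.conf lines (illustrative; the exact file is generated by YaST):

```
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443
acl Safe_ports port 80
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access deny all
http_port 3128
```

Each acls listentry becomes one acl line (name, type, options), and each http_accesses listentry becomes one http_access line, checked in the order listed.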
Configure your FTP Internet server settings.
<ftp-server>
<AnonAuthen>2</AnonAuthen>
<AnonCreatDirs>NO</AnonCreatDirs>
<AnonMaxRate>0</AnonMaxRate>
<AnonReadOnly>NO</AnonReadOnly>
<AntiWarez>YES</AntiWarez>
<Banner>Welcome message</Banner>
<CertFile/>
<ChrootEnable>NO</ChrootEnable>
<EnableUpload>YES</EnableUpload>
<FTPUser>ftp</FTPUser>
<FtpDirAnon>/srv/ftp</FtpDirAnon>
<FtpDirLocal/>
<GuestUser/>
<LocalMaxRate>0</LocalMaxRate>
<MaxClientsNumber>10</MaxClientsNumber>
<MaxClientsPerIP>3</MaxClientsPerIP>
<MaxIdleTime>15</MaxIdleTime>
<PasMaxPort>40500</PasMaxPort>
<PasMinPort>40000</PasMinPort>
<PassiveMode>YES</PassiveMode>
<SSL>0</SSL>
<SSLEnable>NO</SSLEnable>
<SSLv2>NO</SSLv2>
<SSLv3>NO</SSLv3>
<StartDaemon>2</StartDaemon>
<StartXinetd>YES</StartXinetd>
<TLS>YES</TLS>
<Umask/>
<UmaskAnon/>
<UmaskLocal/>
<VerboseLogging>NO</VerboseLogging>
<VirtualUser>NO</VirtualUser>
</ftp-server>|
Element |
Description |
Comment |
|---|---|---|
|
AnonAuthen |
Enable/disable anonymous and local users. |
Authenticated Users Only: 1; Anonymous Only: 0; Both: 2 |
|
AnonCreatDirs |
Anonymous users can create directories. |
Values: YES/NO |
|
AnonReadOnly |
Anonymous users can upload. |
Values: YES/NO |
|
AnonMaxRate |
The maximum data transfer rate permitted for anonymous clients. |
KB/s |
|
AntiWarez |
Disallow downloading of files that were uploaded but not validated by a local admin. |
Values: YES/NO |
|
Banner |
Specify the name of a file containing the text to display when someone connects to the server. | |
|
CertFile |
DSA certificate to use for SSL-encrypted connections |
This option specifies the location of the DSA certificate to use for SSL-encrypted connections. |
|
ChrootEnable |
When enabled, local users will be (by default) placed in a chroot jail in their home directory after login. |
Warning: This option has security implications. Values: YES/NO |
|
EnableUpload |
If enabled, FTP users can upload. |
To allow anonymous users to upload, enable |
|
FTPUser |
Defining anonymous FTP user. | |
|
FtpDirAnon |
FTP directory for anonymous users. |
Specify a directory which is used for FTP anonymous users. |
|
FtpDirLocal |
FTP directory for authenticated users. |
Specify a directory which is used for FTP authenticated users. |
|
LocalMaxRate |
The maximum data transfer rate permitted for local authenticated users. |
KB/s |
|
MaxClientsNumber |
The maximum number of clients allowed to connect. | |
|
MaxClientsPerIP |
Maximum number of clients per IP. |
The maximum number of clients allowed to connect from the same source Internet address. |
|
MaxIdleTime |
The maximum time (timeout) a remote client may wait between FTP commands. |
Minutes |
|
PasMaxPort |
Maximum value for a port range for passive connection replies. |
|
|
PasMinPort |
Minimum value for a port range for passive connection replies. |
|
|
PassiveMode |
Enable Passive Mode |
Value: YES/NO |
|
SSL |
Security Settings |
Disable SSL/TLS: 0; Accept SSL and TLS: 1; Refuse Connections Without SSL/TLS: 2 |
|
SSLEnable |
If enabled, SSL connections are allowed. |
Value: YES/NO |
|
SSLv2 |
If enabled, SSL version 2 connections are allowed. |
Value: YES/NO |
|
SSLv3 |
If enabled, SSL version 3 connections are allowed. |
Value: YES/NO |
|
StartDaemon |
FTP daemon is started. |
Manually: 0; when booting: 1; via xinetd: 2 |
|
StartXinetd |
Has to be set to YES if StartDaemon is 2 |
Value: YES/NO |
|
TLS |
If enabled, TLS connections are allowed. |
Value: YES/NO |
|
Umask |
File creation mask. (umask for files):(umask for directories). |
For example |
|
UmaskAnon |
The value to which the umask for file creation is set for anonymous users. |
To specify octal values, remember the "0" prefix, otherwise the value will be treated as a base 10 integer. |
|
UmaskLocal |
Umask for authenticated users. |
To specify octal values, remember the "0" prefix, otherwise the value will be treated as a base 10 integer. |
|
VerboseLogging |
When enabled, all FTP requests and responses are logged. |
Value: YES/NO |
|
VirtualUser |
By using virtual users, FTP accounts can be administrated without affecting system accounts. |
Value: YES/NO |
A proper firewall configuration is required for the FTP server to run correctly.
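As a sketch of what such a firewall setup might look like with firewalld (illustrative commands; adjust zone and service names to your setup), allowing FTP plus the passive port range from the example profile above:

```
# Illustrative firewalld commands; adjust to your environment
firewall-cmd --permanent --add-service=ftp
# Passive port range matching PasMinPort/PasMaxPort above
firewall-cmd --permanent --add-port=40000-40500/tcp
firewall-cmd --reload
```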
Configure your TFTP Internet server settings.
Use this to enable a server for TFTP (trivial file transfer protocol). The
server will be started using the systemd socket.
Note that TFTP and FTP are not the same.
<tftp-server>
<start_tftpd config:type="boolean">true</start_tftpd>
<tftp_directory>/tftpboot</tftp_directory>
</tftp-server>|
Element |
Description |
Comment |
|---|---|---|
|
start_tftpd |
Enabling TFTP server service. |
Value: true/false |
|
tftp_directory |
Boot Image Directory: Specify the directory where served files are located. |
The usual value is /tftpboot. The directory will be created if it does not exist. The server uses this as its root directory (using the -s option). |
The YaST firstboot utility (YaST Initial System Configuration), which runs after the installation is completed, lets you configure the system before creation of the installation image. On the first boot after configuration, users are guided through a series of steps that allow for easier configuration of their desktops. YaST firstboot does not run by default and needs to be configured to run.
<firstboot>
<firstboot_enabled config:type="boolean">true</firstboot_enabled>
</firstboot>
Using the features of this module, you can change the local security settings on the target system. The local security settings include the boot configuration, login settings, password settings, user addition settings, and file permissions.
Configuring the security settings automatically is similar to the
Custom Settings in the security module available in
the running system. This allows you to create a customized configuration.
See the reference for the meaning and the possible values of the settings in the following example.
<security>
<console_shutdown>ignore</console_shutdown>
<displaymanager_remote_access>no</displaymanager_remote_access>
<fail_delay>3</fail_delay>
<faillog_enab>yes</faillog_enab>
<gid_max>60000</gid_max>
<gid_min>101</gid_min>
<gdm_shutdown>root</gdm_shutdown>
<lastlog_enab>yes</lastlog_enab>
<encryption>md5</encryption>
<obscure_checks_enab>no</obscure_checks_enab>
<pass_max_days>99999</pass_max_days>
<pass_max_len>8</pass_max_len>
<pass_min_days>1</pass_min_days>
<pass_min_len>6</pass_min_len>
<pass_warn_age>14</pass_warn_age>
<passwd_use_cracklib>yes</passwd_use_cracklib>
<permission_security>secure</permission_security>
<run_updatedb_as>nobody</run_updatedb_as>
<uid_max>60000</uid_max>
<uid_min>500</uid_min>
</security>
Change various password settings. These settings are mainly stored in
the /etc/login.defs file.
Use this resource to activate one of the encryption methods currently
supported. If not set, DES is configured.
DES, the Linux default method, works in all network
environments, but it restricts you to passwords no longer than eight
characters. MD5 allows longer passwords and thus
provides more security, but some network protocols do not support it,
and you may have problems with NIS. Blowfish is also
supported.
Additionally, you can set up the system to check for password plausibility and length etc.
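As an illustration, the password-related values from the security profile example above end up as the corresponding keys in /etc/login.defs (a sketch of selected lines; the exact set of keys depends on the product's PAM configuration):

```
PASS_MAX_DAYS   99999
PASS_MIN_DAYS   1
PASS_MIN_LEN    6
PASS_WARN_AGE   14
```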
Use the security resource to change various boot settings.
When someone at the console has pressed the Ctrl–Alt–Del key combination, the system usually reboots. Sometimes it is desirable to ignore this event, for example, when the system serves as both workstation and server.
Configure a list of users allowed to shut down the machine from GDM.
Change various login settings. These settings are mainly stored in the
/etc/login.defs file.
useradd settings: Set the minimum and maximum possible user and group IDs.
This module allows the configuration of the audit daemon and to add rules for the audit subsystem.
<audit-laf>
<auditd>
<flush>INCREMENTAL</flush>
<freq>20</freq>
<log_file>/var/log/audit/audit.log</log_file>
<log_format>RAW</log_format>
<max_log_file>5</max_log_file>
<max_log_file_action>ROTATE</max_log_file_action>
<name_format>NONE</name_format>
<num_logs>4</num_logs>
</auditd>
<rules/>
</audit-laf>|
Attribute |
Values |
Description |
|---|---|---|
|
|
Describes how to write the data to disk. |
If set to |
|
|
This parameter tells how many records to write before issuing an explicit flush to disk. |
The parameter |
|
|
The full path name to the log file. | |
|
|
How much information needs to be logged. |
Set |
|
|
Maximum size of a log file. |
Unit: Megabytes |
|
|
Number of log files. |
|
|
|
What happens if the log capacity has been reached. |
If the action is set to |
|
|
Computer Name Format describes how to write the computer name to the log file. |
If |
|
|
Rules for auditctl |
You can edit the rules manually, which we only recommend for
advanced users. For more information about all options, see
|
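The auditd values from the profile above correspond to the following lines in /etc/audit/auditd.conf (a sketch of the rendered file):

```
flush = INCREMENTAL
freq = 20
log_file = /var/log/audit/audit.log
log_format = RAW
max_log_file = 5
max_log_file_action = ROTATE
name_format = NONE
num_logs = 4
```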
AutoYaST supports defining local users, groups, special login settings and even default options for new users. Those settings are defined in the following sections:
List of users
Default options for new users
List of groups
Special login settings like password-less login or autologin
Users and groups are set up during the first stage, so you can set up a usable system without running the second stage.
A list of users can be defined in the <users>
section. Take into account that at least the root user should be
set up so you can log in after the installation is finished.
<users config:type="list">
<user>
<username>root</username>
<user_password>password</user_password>
<encrypted config:type="boolean">false</encrypted>
</user>
<user>
<username>tux</username>
<user_password>password</user_password>
<encrypted config:type="boolean">false</encrypted>
</user>
</users>
The following example shows a more complex scenario. System-wide default
settings from /etc/default/useradd, such as the
shell or the parent directory for the home directory, are applied.
<users config:type="list">
<user>
<username>root</username>
<user_password>password</user_password>
<uid>1001</uid>
<gid>100</gid>
<encrypted config:type="boolean">false</encrypted>
<fullname>Root User</fullname>
<authorized_keys config:type="list">
<listentry>command="/opt/login.sh" ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKLt1vnW2vTJpBp3VK91rFsBvpY97NljsVLdgUrlPbZ/L51FerQQ+djQ/ivDASQjO+567nMGqfYGFA/De1EGMMEoeShza67qjNi14L1HBGgVojaNajMR/NI2d1kDyvsgRy7D7FT5UGGUNT0dlcSD3b85zwgHeYLidgcGIoKeRi7HpVDOOTyhwUv4sq3ubrPCWARgPeOLdVFa9clC8PTZdxSeKp4jpNjIHEyREPin2Un1luCIPWrOYyym7aRJEPopCEqBA9HvfwpbuwBI5F0uIWZgSQLfpwW86599fBo/PvMDa96DpxH1VlzJlAIHQsMkMHbsCazPNC0++Kp5ZVERiH root@example.net</listentry>
</authorized_keys>
</user>
<user>
<username>tux</username>
<user_password>password</user_password>
<uid>1002</uid>
<gid>100</gid>
<encrypted config:type="boolean">false</encrypted>
<fullname>Plain User</fullname>
<home>/Users/plain</home>
<password_settings>
<max>120</max>
<inact>5</inact>
</password_settings>
</user>
</users>
authorized_keys file will be overwritten
If the profile defines a set of SSH authorized keys for a user in the
authorized_keys section, an existing
$HOME/.ssh/authorized_keys file will be overwritten.
If the file does not exist, it will be created with the specified
content. To avoid overwriting an existing authorized_keys file, do not
specify the respective section in the AutoYaST control file.
Specifying a user ID (uid)
Each user on a Linux system has a numeric user ID. You can either
specify such a user ID within the AutoYaST control file manually by using
uid, or let the system automatically choose a
user ID by not using uid.
User IDs should be unique throughout the system. If not, some
applications such as the login manager gdm may no longer work as expected.
When adding users with the AutoYaST control file, it is strongly recommended not to mix user defined IDs and automatically provided IDs. When doing so, unique IDs cannot be guaranteed. Either specify IDs for all users added with the AutoYaST control file or let the system choose the ID for all users.
|
Attribute |
Values |
Description |
|---|---|---|
|
|
Text <username>lukesw</username> |
Required. It should be a valid user name. Check |
|
|
Text <fullname>Tux Torvalds</fullname> |
Optional. User's fullname. |
|
|
Text <forename>Tux</forename> |
Optional. User's forename. |
|
|
Text <surname>Skywalker</surname> |
Optional. User's surname. |
|
|
Number <uid>1001</uid> |
Optional. User ID. It must be a unique, non-negative
number. If not specified, AutoYaST will automatically choose a user
ID. Also refer to Note: Specifying a user ID ( |
|
|
Number <gid>100</gid> |
Optional. Initial group ID. It must be a unique, non-negative number. Moreover, it must refer to an existing group. |
|
|
Path <home>/home/luke</home> |
Optional. Absolute path to the user's home directory. By default,
|
|
|
Path <shell>/usr/bin/zsh</shell> |
Optional. |
|
|
Text <user_password>some-password</user_password> |
Optional. User's password can be written in plain text (not
recommended) or in encrypted form. Check
|
|
|
Boolean <encrypted config:type="boolean">true</encrypted> |
Optional. Considered |
|
|
Password settings <password_settings> <expire/> <max>60</max> <warn>7</warn> </password_settings> |
Optional. Some password settings can be customized:
|
|
|
List of authorized keys <authorized_keys config:type="list"> <listentry>ssh-rsa ...</listentry> </authorized_keys> |
A list of authorized keys to be written to |
The profile can specify a set of default values for new users like
password expiration, initial group, home directory prefix, etc. Besides using them
as default values for the users that are defined in the profile, AutoYaST will
write those settings to /etc/default/useradd to be read by
useradd.
|
Attribute |
Values |
Description |
|---|---|---|
|
|
Text <group>100</group> |
Optional. Default initial login group. |
|
|
Text <groups>users</groups> |
Optional. List of additional groups. |
|
|
Path <home>/home</home> |
Optional. User's home directory prefix. |
|
|
Date <expire>2017-12-31</expire> |
Optional. Default password expiration date in |
|
|
Number <inactive>3</inactive> |
Optional. Number of days after which an expired account is disabled. |
|
|
Boolean <no_groups config:type="boolean">true</no_groups> |
Optional. Do not use secondary groups. |
|
|
Path <shell>/usr/bin/fish</shell> |
Default login shell. |
|
|
Path <skel>/etc/skel</skel> |
Optional. Location of the files to be used as skel when adding a new
user. You can find more information in |
|
|
File creation mode mask <umask>022</umask> |
Set the file creation mode mask for the home directory. By default
|
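Putting the attributes from the table together, a defaults section could look like the following sketch, using the example values above (assuming the enclosing <user_defaults> element used by AutoYaST profiles):

```xml
<user_defaults>
<group>100</group>
<groups>users</groups>
<home>/home</home>
<expire>2017-12-31</expire>
<inactive>3</inactive>
<shell>/usr/bin/fish</shell>
<skel>/etc/skel</skel>
<umask>022</umask>
</user_defaults>
```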
A list of groups can be defined in <groups>
as shown in the example.
<groups>
<group>
<gid>100</gid>
<groupname>users</groupname>
<userlist>bob,alice</userlist>
</group>
</groups>|
Attribute |
Values |
Description |
|---|---|---|
|
|
Text <groupname>users</groupname> |
Required. It should be a valid group name. Check |
|
|
Number <gid>100</gid> |
Optional. Group ID. It must be a unique and non-negative number. |
|
|
Text <group_password>password</group_password> |
Optional. The group's password can be written in plain text (not
recommended) or in encrypted form. Check the |
|
|
Boolean <encrypted config:type="boolean">true</encrypted> |
Optional. Indicates if the group's password in the profile is encrypted or not. |
|
|
Users list <userlist>bob,alice</userlist> |
Optional. A list of users who belong to the group. User names must be separated by commas. |
Two special login settings can be enabled through an AutoYaST profile: autologin and password-less login. Both of them are disabled by default.
<login_settings> <autologin_user>vagrant</autologin_user> <password_less_login config:type="boolean">true</password_less_login> </login_settings>
|
Attribute |
Values |
Description |
|---|---|---|
|
|
Boolean <password_less_login config:type="boolean">true</password_less_login> |
Optional. Enables password-less login. It only affects graphical login. |
|
|
Text <autologin_user>alice</autologin_user> |
Optional. Enables autologin for the given user. |
By adding scripts to the auto-installation process you can customize the installation according to your needs and take control in different stages of the installation.
In the auto-installation process, five types of scripts can be executed at different points in time during the installation:
All scripts need to be in the <scripts> section.
pre-scripts (very early, before anything else
really happens)
postpartitioning-scripts (after partitioning and
mounting to /mnt but before RPM installation)
chroot-scripts (after the package installation,
before the first boot)
post-scripts (during the first boot of the
installed system, no services running)
init-scripts (during the first boot of the
installed system, all services up and running)
Executed before YaST does any real change to the system (before partitioning and package installation but after the hardware detection).
You can use a pre-script to modify your control file and let AutoYaST
reread it. Find your control file in
/tmp/profile/autoinst.xml. Adjust the file and
store the modified version in
/tmp/profile/modified.xml. AutoYaST will read the
modified file after the pre-script finishes.
It is also possible to change the partitioning in your pre-script.
Pre-scripts are executed at an early stage of the installation. This
means if you have requested to confirm the installation, the
pre-scripts will be executed before the confirmation screen shows up
(profile/install/general/mode/confirm).
To call zypper in the pre-install script, set the environment variable ZYPP_LOCKFILE_ROOT="/var/run/autoyast" to prevent conflicts with the running YaST process.
Pre-Install Script elements must be placed as follows:
<scripts>
<pre-scripts config:type="list">
<script>
...
</script>
</pre-scripts>
</scripts>
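A minimal pre-script sketch following the description above: it reads the delivered control file from /tmp/profile/autoinst.xml and stores an adjusted copy as /tmp/profile/modified.xml, which AutoYaST reads after the script finishes (the file name and the sed expression are purely illustrative):

```xml
<scripts>
<pre-scripts config:type="list">
<script>
<filename>modify_profile.sh</filename>
<source><![CDATA[
#!/bin/sh
# Illustrative: adjust a value in the profile before AutoYaST rereads it
sed -e 's/linux-wtyj/myhost/' \
  /tmp/profile/autoinst.xml > /tmp/profile/modified.xml
]]></source>
</script>
</pre-scripts>
</scripts>
```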
Executed after YaST has done the partitioning and written the
fstab. The empty system is already mounted to
/mnt.
Post-partitioning script elements must be placed as follows:
<scripts>
<postpartitioning-scripts config:type="list">
<script>
...
</script>
</postpartitioning-scripts>
</scripts>
Chroot scripts are executed before the machine reboots for the first
time. You can execute chroot scripts before the installation chroots
into the installed system and configures the boot loader or you can
execute a script after the chroot into the installed system has
happened (look at the chrooted parameter for that).
Chroot Environment script elements must be placed as follows:
<scripts>
<chroot-scripts config:type="list">
<script>
...
</script>
</chroot-scripts>
</scripts>
These scripts are executed after AutoYaST has completed the system configuration and after it has booted the system for the first time.
It is possible to execute post scripts in an earlier phase while the
installation network is still up and before AutoYaST configures the
system. To run network-enabled post scripts, the boolean property
network_needed needs to be set to
true.
Post-install script elements must be placed as follows:
<scripts>
<post-scripts config:type="list">
<script>
...
</script>
</post-scripts>
</scripts>
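As an illustration, a post-script that must run while the installation network is still up sets network_needed. This is a sketch; the URL and file name here are placeholders:

```xml
<scripts>
  <post-scripts config:type="list">
    <script>
      <filename>fetch-data.sh</filename>
      <interpreter>shell</interpreter>
      <network_needed config:type="boolean">true</network_needed>
      <source><![CDATA[
#!/bin/sh
# Runs before AutoYaST configures the system,
# while the installation network is still up
wget http://10.10.0.1/postdata.tar -O /tmp/postdata.tar
]]></source>
    </script>
  </post-scripts>
</scripts>
```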
These scripts are executed when YaST has finished, during the
initial boot process after the network has been initialized. These
final scripts are run using
/usr/lib/YaST2/bin/autoyast-initscripts.sh and are
executed only once. Init scripts are configured using the tag
init-scripts.
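A minimal init script configuration could look like the following sketch; the file name and message are examples:

```xml
<scripts>
  <init-scripts config:type="list">
    <script>
      <filename>env-check.sh</filename>
      <source><![CDATA[
#!/bin/sh
# Runs once during the initial boot, after YaST has finished
echo "Init script ran once" > /tmp/init_out.txt
]]></source>
    </script>
  </init-scripts>
</scripts>
```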
The following elements must be between the <scripts><init-scripts config:type="list"><script> ... </script></init-scripts>...</scripts> tags
|
Element |
Description |
Comment |
|---|---|---|
|
|
Define a location from where the script gets fetched. Locations can be the same as for the profile (HTTP, FTP, NFS, etc.). <location >http://10.10.0.1/myInitScript.sh</location> |
Either <location> or <source> must be defined. |
|
|
The script itself (source code), encapsulated in a CDATA tag. If you do not want to put the whole shell script into the XML profile, use the location parameter. <source> <![CDATA[ echo "Testing the init script" > /tmp/init_out.txt ]]> </source> |
Either <location> or <source> must be defined. |
|
|
The file name of the script. It will be stored in a temporary
directory under <filename>myInitScript5.sh</filename> |
Optional in case you only have a single init script. The default
name ( |
|
|
A script is only run once. Even if you use ayast_setup to run an
XML file multiple times, the script is only run once. Change this
default behavior by setting this boolean to
<rerun config:type="boolean">true</rerun> |
Optional. Default is |
When added to the control file manually, scripts need to be included in a CDATA element to avoid confusion with the file syntax and other tags defined in the control file.
All XML elements described below can be used for each of the script
types described above. The only exceptions are
chrooted (valid only for chroot scripts) and
network_needed (valid only for post-install scripts).
|
Element |
Description |
Comment |
|---|---|---|
|
|
Define a location from where the script gets fetched. Locations can be the same as for the control file (HTTP, FTP, NFS, etc.). <location >http://10.10.0.1/myPreScript.sh</location> |
Either |
|
|
The script itself (source code), encapsulated in a CDATA tag. If you do not want to put the whole shell script into the XML control file, refer to the location parameter. <source> <![CDATA[ echo "Testing the pre script" > /tmp/pre-script_out.txt ]]> </source> |
Either |
|
|
Specify the interpreter that must be used for the script. Supported options are shell and perl. <interpreter>perl</interpreter> |
Optional (default is |
|
|
The file name of the script. It will be stored in a temporary
directory under <filename>myPreScript5.sh</filename> |
Optional. Default is the type of the script (pre-scripts in this case). If you have more than one script, you should define different names for each script. |
|
|
If this boolean is <feedback config:type="boolean">true</feedback> |
Optional, default is |
|
|
This can be <feedback_type>warning</feedback_type> |
Optional, if missing, an always blocking pop-up is used. |
|
|
If this is <debug config:type="boolean">true</debug> |
Optional, default is |
|
|
This text will be shown in a pop-up for the time the script is running in the background. <notification>Please wait while script is running...</notification> |
Optional, if not configured, no notification pop-up will be shown. |
|
|
It is possible to specify parameters given to the script being
called. You may have more than one <param-list> <param>par1</param> <param>par2 par3</param> <param>"par4.1 par4.2"</param> </param-list> |
Optional, if not configured, no parameters are passed to the script. |
|
|
A script is only run once. Even if you use
<rerun config:type="boolean">true</rerun> |
Optional, default is |
|
|
If set to <chrooted config:type="boolean" >true</chrooted> |
Optional, default is |
|
|
If set to <network_needed config:type="boolean" >true</network_needed> |
Optional, default is |
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<scripts>
<chroot-scripts config:type="list">
<script>
<chrooted config:type="boolean">true</chrooted>
<filename>chroot.sh</filename>
<interpreter>shell</interpreter>
<source><![CDATA[
#!/bin/sh
echo "Testing chroot (chrooted) scripts"
ls
]]>
</source>
</script>
<script>
<filename>chroot.sh</filename>
<interpreter>shell</interpreter>
<source><![CDATA[
#!/bin/sh
echo "Testing chroot scripts"
df
cd /mnt
ls
]]>
</source>
</script>
</chroot-scripts>
<post-scripts config:type="list">
<script>
<filename>post.sh</filename>
<interpreter>shell</interpreter>
<source><![CDATA[
#!/bin/sh
echo "Running Post-install script"
systemctl start portmap
mount -t nfs 192.168.1.1:/local /mnt
cp /mnt/test.sh /tmp
umount /mnt
]]>
</source>
</script>
<script>
<filename>post.pl</filename>
<interpreter>perl</interpreter>
<source><![CDATA[
#!/usr/bin/perl
print "Running Post-install script";
]]>
</source>
</script>
</post-scripts>
<pre-scripts config:type="list">
<script>
<interpreter>shell</interpreter>
<location>http://192.168.1.1/profiles/scripts/prescripts.sh</location>
</script>
<script>
<filename>pre.sh</filename>
<interpreter>shell</interpreter>
<source><![CDATA[
#!/bin/sh
echo "Running pre-install script"
]]>
</source>
</script>
</pre-scripts>
<postpartitioning-scripts config:type="list">
<script>
<filename>postpart.sh</filename>
<interpreter>shell</interpreter>
<debug config:type="boolean">false</debug>
<feedback config:type="boolean">true</feedback>
<source><![CDATA[
touch /mnt/testfile
echo Hi
]]>
</source>
</script>
</postpartitioning-scripts>
</scripts>
</profile>
After installation is finished, the scripts and the output logs can be
found in the directory /var/adm/autoinstall. The
scripts are located in the subdirectory scripts
and the output logs in the log directory.
The log consists of the output produced when executing the shell scripts using the following command:
/bin/sh -x SCRIPT_NAME > /var/adm/autoinstall/logs/SCRIPT_NAME.log 2>&1
Using the sysconfig resource, it is possible to define configuration
variables in the sysconfig repository
(/etc/sysconfig) directly. Sysconfig variables
offer the possibility to fine-tune many system components and
environment variables exactly to your needs.
The following example shows how a variable can be set using the sysconfig resource.
<sysconfig config:type="list" >
<sysconfig_entry>
<sysconfig_key>XNTPD_INITIAL_NTPDATE</sysconfig_key>
<sysconfig_path>/etc/sysconfig/xntp</sysconfig_path>
<sysconfig_value>ntp.host.com</sysconfig_value>
</sysconfig_entry>
<sysconfig_entry>
<sysconfig_key>HTTP_PROXY</sysconfig_key>
<sysconfig_path>/etc/sysconfig/proxy</sysconfig_path>
<sysconfig_value>proxy.host.com:3128</sysconfig_value>
</sysconfig_entry>
<sysconfig_entry>
<sysconfig_key>FTP_PROXY</sysconfig_key>
<sysconfig_path>/etc/sysconfig/proxy</sysconfig_path>
<sysconfig_value>proxy.host.com:3128</sysconfig_value>
</sysconfig_entry>
</sysconfig>
Both relative and absolute paths can be provided. If no absolute path
is given, it is treated as a sysconfig file under the
/etc/sysconfig directory.
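For example, the following entry uses a relative path and is equivalent to specifying /etc/sysconfig/proxy, matching the HTTP_PROXY entry shown above:

```xml
<sysconfig_entry>
  <sysconfig_key>HTTP_PROXY</sysconfig_key>
  <!-- relative path, resolved as /etc/sysconfig/proxy -->
  <sysconfig_path>proxy</sysconfig_path>
  <sysconfig_value>proxy.host.com:3128</sysconfig_value>
</sysconfig_entry>
```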
For many applications and services you may have a
configuration file which should be copied to the appropriate location on
the installed system. For example, if you are installing a Web server, you may have a server configuration file
(httpd.conf).
Using this resource, you can embed the file into the control file by specifying the final path on the installed system. YaST will copy this file to the specified location.
This feature requires the autoyast2 package to be installed. If the package is missing, AutoYaST installs it automatically.
You can specify the file_location where the file
should be retrieved from. This can also be a location on the network
such as an HTTP server:
<file_location>http://my.server.site/issue</file_location>.
You can create directories by specifying a file_path
that ends with a slash.
<files config:type="list">
<file>
<file_path>/etc/apache2/httpd.conf</file_path>
<file_contents>
<![CDATA[
some content
]]>
</file_contents>
</file>
<file>
<file_path>/mydir/a/b/c/</file_path> <!-- create directory -->
</file>
</files>
A more advanced example is shown below. This configuration will create a
file using the content supplied in file_contents and
change the permissions and ownership of the file. After the file has
been copied to the system, a script is executed. This can be used to
modify the file and prepare it for the client's environment.
<files config:type="list">
<file>
<file_path>/etc/someconf.conf</file_path>
<file_contents>
<![CDATA[
some content
]]>
</file_contents>
<file_owner>tux.users</file_owner>
<file_permissions>444</file_permissions>
<file_script>
<interpreter>shell</interpreter>
<source>
<![CDATA[
#!/bin/sh
echo "Testing file scripts" >> /etc/someconf.conf
df
cd /mnt
ls
]]>
</source>
</file_script>
</file>
</files>
You have the option to let the user decide the values of specific parts
of the control file during the installation. If you use this feature, a
pop-up asks the user to enter a specific part of the control file
during installation. For example, if you want a fully automatic
installation, but the user should set the password of the local account,
you can do this via the ask directive in the control file.
The elements listed below must be placed within the following XML structure:
<general>
<ask-list config:type="list">
<ask>
...
</ask>
</ask-list>
</general>
|
Element |
Description |
Comment |
|---|---|---|
|
|
The question you want to ask the user. <question>Enter the LDAP server</question> |
The default value is the path to the element (the path often looks strange, so we recommend entering a question). |
|
|
Set a preselection for the user. A text entry will be filled out with this value. A check box will be true or false and a selection will have the given value preselected. <default>dc=suse,dc=de</default> |
Optional. |
|
|
An optional help text that is shown on the left side of the question. <help>Enter the LDAP server address.</help> |
Optional. |
|
|
An optional title that is shown above the questions. <title>LDAP server</title> |
Optional. |
|
|
The type of the element you want to change. Possible values are
<type>symbol</type> |
Optional. The default is |
|
|
If this boolean is set to <password config:type="boolean">true</password> |
Optional. The default is |
|
|
A list of <pathlist config:type="list"> <path>networking,dns,hostname</path> <path>...</path> </pathlist> To change the
password of the first user in the control file, you need to set the
path to <users config:type="list">
<user>
<username>root</username>
<user_password>password to change</user_password>
<encrypted config:type="boolean">false</encrypted>
</user>
<user>
<username>tux</username>
<user_password>password to change</user_password>
<encrypted config:type="boolean">false</encrypted>
</user>
</users> |
This information is optional but you should at least provide
|
|
|
You can store the answer to a question in a file, to use it in one
of your scripts later. If you ask during
<file>/tmp/answer_hostname</file> |
This information is optional, but you should at least provide
|
|
stage |
Stage configures the installation stage in which the question pops
up. You can set this value to <stage>cont</stage> |
Optional. The default is |
|
|
The selection element contains a list of <selection config:type="list">
<entry>
<value>
btrfs
</value>
<label>
Btrfs File System
</label>
</entry>
<entry>
<value>
ext3
</value>
<label>
Extended3 File System
</label>
</entry>
</selection> |
Optional for |
|
|
You can ask more than one question per dialog. To do so, specify the dialog-id with an integer. All questions with the same dialog-id belong to the same dialog. The dialogs are sorted by the id too. <dialog config:type="integer">3</dialog> |
Optional. |
|
|
You can have more than one question per dialog. To make that possible, you need to specify the element-id with an integer. The questions in a dialog are sorted by id. <element config:type="integer">1</element> |
Optional (see dialog). |
|
|
You can increase the default width of the dialog. If there are multiple width specifications per dialog, the largest one is used. The number is roughly equivalent to the number of characters. <width config:type="integer">50</width> |
Optional. |
|
|
You can increase the default height of the dialog. If there are multiple height specifications per dialog, the largest one is used. The number is roughly equivalent to the number of lines. <height config:type="integer">15</height> |
Optional. |
|
|
You can have more than one question per dialog. Each question on a dialog has a frame that can have a frame title, a small caption for each question. You can put multiple elements into one frame. They need to have the same frame title. <frametitle>User data</frametitle> |
Optional. Default is no frame title. |
|
|
You can run scripts after a question has been answered (see the table below for detailed instructions about scripts). <script>...</script> |
Optional (default is no script). |
|
|
You can change the label on the button. The last element that specifies the label for a dialog wins. <ok_label>Finish</ok_label> |
Optional. |
|
|
You can change the label on the button. The last element that specifies the label for a dialog wins. <back_label>change values</back_label> |
Optional. |
|
|
You can specify an integer here that is used as timeout in seconds. If the user does not answer the question before the timeout, the default value is taken as answer. When the user touches or changes any widget in the dialog, the timeout is turned off and the dialog needs to be confirmed via . <timeout config:type="integer">30</timeout> |
Optional. A missing value is interpreted as |
|
|
You can run scripts to set the default value for a question (see
Section 4.32.1, “Default Value Scripts” for detailed
instructions about default value scripts). This feature is useful
if you can <default_value_script>...</default_value_script> |
Optional. Default is no script. |
You can run scripts to set the default value for a question. This
feature is useful if you can calculate a default
value, especially in combination with the timeout
option.
The elements listed below must be placed within the following XML structure:
<general>
<ask-list config:type="list">
<ask>
<default_value_script>
...
</default_value_script>
</ask>
</ask-list>
</general>
|
Element |
Description |
Comment |
|---|---|---|
|
|
The source code of the script. Whatever you
<source>...</source> |
This value is required, otherwise nothing would be executed. |
|
|
The interpreter to use. <interpreter>perl</interpreter> |
The default value is |
You can run scripts after a question has been answered.
The elements listed below must be placed within the following XML structure:
<general>
<ask-list config:type="list">
<ask>
<script>
...
</script>
</ask>
</ask-list>
</general>
|
Element |
Description |
Comment |
|---|---|---|
|
|
The file name of the script. <filename>my_ask_script.sh</filename> |
The default is ask_script.sh |
|
|
The source code of the script. Together with
<source>...</source> |
This value is required, otherwise nothing would be executed. |
|
|
A boolean that passes the value of the answer to the question as
an environment variable to the script. The variable is named
<environment config:type="boolean">true</environment> |
Optional. Default is |
|
|
A boolean that turns on feedback for the script execution. STDOUT will be displayed in a pop-up window that must be confirmed after the script execution. <feedback config:type="boolean">true</feedback> |
Optional, default is |
|
|
A boolean that turns on debugging for the script execution. <debug config:type="boolean">true</debug> |
Optional, default is |
|
|
A boolean that keeps the dialog open until the script has an exit
code of 0 (zero). So you can parse and check the answers the user
gave in the script and display an error with the
<rerun_on_error config:type="boolean">true</rerun_on_error> |
Optional, default is |
Below you can see an example of the usage of the ask
feature.
<general>
<ask-list config:type="list">
<ask>
<pathlist config:type="list">
<path>ldap,ldap_server</path>
</pathlist>
<stage>cont</stage>
<help>Choose your server depending on your department</help>
<selection config:type="list">
<entry>
<value>ldap1.mydom.de</value>
<label>LDAP for development</label>
</entry>
<entry>
<value>ldap2.mydom.de</value>
<label>LDAP for sales</label>
</entry>
</selection>
<default>ldap2.mydom.de</default>
<default_value_script>
<source> <![CDATA[
echo -n "ldap1.mydom.de"
]]>
</source>
</default_value_script>
</ask>
<ask>
<pathlist config:type="list">
<path>networking,dns,hostname</path>
</pathlist>
<question>Enter Hostname</question>
<stage>initial</stage>
<default>enter your hostname here</default>
</ask>
<ask>
<pathlist config:type="list">
<path>partitioning,0,partitions,0,filesystem</path>
</pathlist>
<question>File System</question>
<type>symbol</type>
<selection config:type="list">
<entry>
<value config:type="symbol">ext4</value>
<label>default File System (recommended)</label>
</entry>
<entry>
<value config:type="symbol">ext3</value>
<label>Fallback File System</label>
</entry>
</selection>
</ask>
</ask-list>
</general>
The following example shows how to let the user choose between several
AutoYaST control files. AutoYaST will read the
modified.xml file again
after the ask dialogs are done. This way you can fetch a completely new
control file.
<general>
<ask-list config:type="list">
<ask>
<selection config:type="list">
<entry>
<value>part1.xml</value>
<label>Simple partitioning</label>
</entry>
<entry>
<value>part2.xml</value>
<label>encrypted /tmp</label>
</entry>
<entry>
<value>part3.xml</value>
<label>LVM</label>
</entry>
</selection>
<title>XML Profile</title>
<question>Choose a profile</question>
<stage>initial</stage>
<default>part1.xml</default>
<script>
<filename>fetch.sh</filename>
<environment config:type="boolean">true</environment>
<source>
<![CDATA[
wget http://10.10.0.162/$VAL -O /tmp/profile/modified.xml 2>/dev/null
]]>
</source>
<debug config:type="boolean">false</debug>
<feedback config:type="boolean">false</feedback>
</script>
</ask>
</ask-list>
</general>
You can verify the answer of a question with a script like this:
<general>
<ask-list config:type="list">
<ask>
<script>
<filename>my.sh</filename>
<rerun_on_error config:type="boolean">true</rerun_on_error>
<environment config:type="boolean">true</environment>
<source><![CDATA[
if [ "$VAL" = "myhost" ]; then
echo "Illegal Hostname!";
exit 1;
fi
exit 0
]]>
</source>
<debug config:type="boolean">false</debug>
<feedback config:type="boolean">true</feedback>
</script>
<dialog config:type="integer">0</dialog>
<element config:type="integer">0</element>
<pathlist config:type="list">
<path>networking,dns,hostname</path>
</pathlist>
<question>Enter Hostname</question>
<default>enter your hostname here</default>
</ask>
</ask-list>
</general>
This feature is not available on the IBM z Systems (s390x) architecture.
With Kdump the system can create crash dump files if the kernel crashes. Crash dump files contain the memory contents at the time the system crashed. Such core files can be analyzed later by support or a (kernel) developer to find the reason for the crash. Kdump is mostly useful for servers where you cannot easily reproduce such crashes, but where it is important to get the problem fixed.
There is a downside to this: enabling Kdump requires between 64 MB and 128 MB of additional system RAM to be reserved for Kdump in case the system crashes and the dump needs to be generated.
This section only describes how to set up Kdump with AutoYaST. It does not describe how Kdump works. For details, refer to the kdump(7) manual page.
The following example shows a general Kdump configuration.
<kdump>
<!-- memory reservation -->
<add_crash_kernel config:type="boolean">true</add_crash_kernel>
<crash_kernel>256M-:64M</crash_kernel>
<general>
<!-- dump target settings -->
<KDUMP_SAVEDIR>ftp://stravinsky.suse.de/incoming/dumps</KDUMP_SAVEDIR>
<KDUMP_COPY_KERNEL>true</KDUMP_COPY_KERNEL>
<KDUMP_FREE_DISK_SIZE>64</KDUMP_FREE_DISK_SIZE>
<KDUMP_KEEP_OLD_DUMPS>5</KDUMP_KEEP_OLD_DUMPS>
<!-- filtering and compression -->
<KDUMP_DUMPFORMAT>compressed</KDUMP_DUMPFORMAT>
<KDUMP_DUMPLEVEL>1</KDUMP_DUMPLEVEL>
<!-- notification -->
<KDUMP_NOTIFICATION_TO>tux@example.com</KDUMP_NOTIFICATION_TO>
<KDUMP_NOTIFICATION_CC>spam@example.com devnull@example.com</KDUMP_NOTIFICATION_CC>
<KDUMP_SMTP_SERVER>mail.example.com</KDUMP_SMTP_SERVER>
<KDUMP_SMTP_USER></KDUMP_SMTP_USER>
<KDUMP_SMTP_PASSWORD></KDUMP_SMTP_PASSWORD>
<!-- kdump kernel -->
<KDUMP_KERNELVER></KDUMP_KERNELVER>
<KDUMP_COMMANDLINE></KDUMP_COMMANDLINE>
<KDUMP_COMMANDLINE_APPEND></KDUMP_COMMANDLINE_APPEND>
<!-- expert settings -->
<KDUMP_IMMEDIATE_REBOOT>yes</KDUMP_IMMEDIATE_REBOOT>
<KDUMP_VERBOSE>15</KDUMP_VERBOSE>
<KEXEC_OPTIONS></KEXEC_OPTIONS>
</general>
</kdump>
The first step is to reserve memory for Kdump at boot-up. Because the
memory must be reserved very early during the boot process, the
configuration is done via a kernel command line parameter called
crashkernel. The reserved memory will be used to
load a second kernel which will be executed without rebooting if the
first kernel crashes. This second kernel has a special initrd, which
contains all programs necessary to save the dump over the network or to
disk, send a notification e-mail, and finally reboot.
To reserve memory for Kdump, specify the amount
(such as 64M to reserve 64 MB of memory from the
RAM) and the offset. The syntax is
crashkernel=AMOUNT@OFFSET. The kernel can
auto-detect the right offset (except for the Xen hypervisor, where
you need to specify 16M as offset). The amount of
memory that needs to be reserved depends on architecture and main
memory. Refer to
Section 17.7.1, “Manual Kdump Configuration” for recommendations on
the amount of memory to reserve for Kdump.
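For example, to reserve 64 MB with an auto-detected offset, or with the explicit 16 MB offset required under the Xen hypervisor (a sketch; use one of the two forms):

```xml
<!-- offset auto-detected by the kernel -->
<crash_kernel>64M</crash_kernel>

<!-- explicit 16 MB offset, as required under the Xen hypervisor -->
<crash_kernel>64M@16M</crash_kernel>
```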
You can also use the extended command line syntax to specify the amount of reserved memory depending on the amount of system RAM. That is useful if you share one AutoYaST control file for multiple installations, or if you often remove or install memory on one machine. The syntax is:
BEGIN_RANGE_1-END_RANGE_1:AMOUNT_1,BEGIN_RANGE_2-END_RANGE_2:AMOUNT_2@OFFSET
BEGIN_RANGE_1 is the start of the first memory range
(for example: 0M), END_RANGE_1
is the end of the first memory range (it can be empty if
infinity should be assumed), and so on. For example,
256M-2G:64M,2G-:128M reserves 64 MB of
crashkernel memory if the system has between 256 MB and 2 GB RAM and
reserves 128 MB of crashkernel memory if the system has more than 2 GB
RAM.
It is also possible to specify multiple values for the
crashkernel parameter. For example, when you need to reserve different
segments of low and high memory, use values like
72M,low and 256M,high:
<kdump>
<!-- memory reservation (high and low) -->
<add_crash_kernel config:type="boolean">true</add_crash_kernel>
<crash_kernel config:type="list">
<listentry>72M,low</listentry>
<listentry>256M,high</listentry>
</crash_kernel>
</kdump>
The following table shows the settings necessary to reserve memory:
|
Element |
Description |
Comment |
|---|---|---|
|
|
Set to <add_crash_kernel config:type="boolean">true</add_crash_kernel> |
required |
|
|
Use the syntax of the crashkernel command line as discussed above. <crash_kernel>256M-:64M</crash_kernel> A list of values is also supported. <crash_kernel config:type="list"> <listentry>72M,low</listentry> <listentry>256M,high</listentry> </crash_kernel> |
required |
The element KDUMP_SAVEDIR specifies the URL to
where the dump is saved. The following methods are possible:
file to save to the local disk,
ftp to save to an FTP server (without
encryption),
sftp to save to an SSH2 SFTP server,
nfs to save to an NFS location and
cifs to save the dump to a CIFS/SMB export from
Samba or Microsoft Windows.
For details see the kdump(5) manual page. Two examples are:
file:///var/crash (which is the default location
according to FHS) and
ftp://user:password@host:port/incoming/dumps. A
subdirectory, with the time stamp contained in the name, will be
created and the dumps saved there.
When the dump is saved to the local disk,
KDUMP_KEEP_OLD_DUMPS can be used to delete old
dumps automatically. Set it to the number of old dumps that should be
kept. If the target partition would end up with less free disk space
than specified in KDUMP_FREE_DISK_SIZE, the dump is
not saved.
To save the whole kernel and the debug information (if
installed) to the same directory, set
KDUMP_COPY_KERNEL to true. You
will have everything you need to analyze the dump in one directory
(except kernel modules and their debugging information).
The kernel dump is uncompressed and unfiltered. It can get as large as your system RAM. To get smaller files, compress the dump file afterward. The dump needs to be decompressed before opening.
To use page compression, which compresses every page and allows
dynamic decompression with the crash(8) debugging tool, set
KDUMP_DUMPFORMAT to compressed
(default).
You may not want to save all memory pages, for example those filled
with zeroes. To filter the dump, set
KDUMP_DUMPLEVEL. 0 produces a full dump and 31 is
the smallest dump. The manual pages kdump(5) and makedumpfile(8) list
which pages are saved for each value.
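For instance, combining the compressed format with the highest filtering level produces the smallest dump files (a sketch; pick the level that matches the pages you want to keep):

```xml
<general>
  <KDUMP_DUMPFORMAT>compressed</KDUMP_DUMPFORMAT>
  <!-- 31 filters out the most page types; 0 would be a full dump -->
  <KDUMP_DUMPLEVEL>31</KDUMP_DUMPLEVEL>
</general>
```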
|
Element |
Description |
Comment |
|---|---|---|
|
|
A URL that specifies the target to which the dump and related files will be saved. <KDUMP_SAVEDIR>file:///var/crash/</KDUMP_SAVEDIR> |
required |
|
|
Set to <KDUMP_COPY_KERNEL>false</KDUMP_COPY_KERNEL> |
optional |
|
|
Disk space in megabytes that must remain free after saving the dump. If not enough space is available, the dump will not be saved. <KDUMP_FREE_DISK_SIZE>64</KDUMP_FREE_DISK_SIZE> |
optional |
|
|
The number of dumps that are kept (not deleted) if
<KDUMP_KEEP_OLD_DUMPS>4</KDUMP_KEEP_OLD_DUMPS> |
optional |
Configure e-mail notification if you want to be informed when a machine crashes and a dump is saved.
Because Kdump runs in the initrd, a local mail server cannot send the notification e-mail. An SMTP server needs to be specified (see below).
You need to provide exactly one address in
KDUMP_NOTIFICATION_TO. More addresses can be
specified in KDUMP_NOTIFICATION_CC. Only use e-mail
addresses in both cases, not a real name.
Specify KDUMP_SMTP_SERVER and (if the server needs
authentication) KDUMP_SMTP_USER and
KDUMP_SMTP_PASSWORD. Support for TLS/SSL is not
available but may be added in the future.
|
Element |
Description |
Comment |
|---|---|---|
|
|
Exactly one e-mail address to which the e-mail should be sent.
Additional recipients can be specified in
<KDUMP_NOTIFICATION_TO >tux@example.com</KDUMP_NOTIFICATION_TO> |
optional (notification disabled if empty) |
|
|
Zero, one or more recipients that are in the cc line of the notification e-mail. <KDUMP_NOTIFICATION_CC >wilber@example.com geeko@example.com</KDUMP_NOTIFICATION_CC> |
optional |
|
|
Host name of the SMTP server used for mail delivery. SMTP
authentication is supported (see
<KDUMP_SMTP_SERVER>email.suse.de</KDUMP_SMTP_SERVER> |
optional (notification disabled if empty) |
|
|
User name used together with
<KDUMP_SMTP_USER>bwalle</KDUMP_SMTP_USER> |
optional |
|
|
Password used together with <KDUMP_SMTP_PASSWORD>geheim</KDUMP_SMTP_PASSWORD> |
optional |
As already mentioned, a special kernel is booted to save the dump. If
you do not want to use the auto-detection mechanism to find out which
kernel is used (see the kdump(5) manual page that describes the
algorithm which is used to find the kernel), you can specify the
version of a custom kernel in KDUMP_KERNELVER. If
you set it to foo, then the kernel located in
/boot/vmlinuz-foo or
/boot/vmlinux-foo (in that order on platforms that
have a vmlinuz file) will be used.
You can specify the command line used to boot the Kdump kernel.
Normally the boot command line is used, minus settings that are not
relevant for Kdump (like the crashkernel parameter)
plus some settings needed by Kdump (see the manual page kdump(5)). To
specify additional parameters, use KDUMP_COMMANDLINE_APPEND. If you
know what you are doing and you want to specify the entire command line,
set KDUMP_COMMANDLINE.
|
Element |
Description |
Comment |
|---|---|---|
|
|
Version string for the kernel used for Kdump. Leave it empty to use the auto-detection mechanism (strongly recommended). <KDUMP_KERNELVER >2.6.27-default</KDUMP_KERNELVER> |
optional (auto-detection if empty) |
|
|
Additional command line parameters for the Kdump kernel. <KDUMP_COMMANDLINE_APPEND >console=ttyS0,57600</KDUMP_COMMANDLINE_APPEND> |
optional |
|
|
Overwrite the automatically generated Kdump command line. Use with
care. Usually, <KDUMP_COMMANDLINE >root=/dev/sda5 maxcpus=1 irqpoll</KDUMP_COMMANDLINE> |
optional |
|
Element |
Description |
Comment |
|---|---|---|
|
|
<KDUMP_IMMEDIATE_REBOOT >true</KDUMP_IMMEDIATE_REBOOT> |
optional |
|
|
Bitmask that specifies how verbose the Kdump process should be. Read kdump(5) for details. <KDUMP_VERBOSE>3</KDUMP_VERBOSE> |
optional |
|
|
Additional options that are passed to kexec when loading the Kdump kernel. Normally empty. <KEXEC_OPTIONS>--noio</KEXEC_OPTIONS> |
optional |
The Bind DNS server can be configured by adding a dns-server
resource. The three most straightforward properties of that resource can
have a value of 1 to enable them or 0 to disable them.
|
Attribute |
Value |
Description |
|---|---|---|
|
|
0 / 1 |
The DNS server must be jailed in a chroot. |
|
|
0 / 1 |
Bind is enabled (executed on system start). |
|
|
0 / 1 |
Store the settings in LDAP instead of native configuration files. |
<dns-server>
  <chroot>0</chroot>
  <start_service>1</start_service>
  <use_ldap>0</use_ldap>
</dns-server>
In addition to those basic settings, there are three properties of type list that can be used to fine-tune the service configuration.
|
List |
Description |
|---|---|
|
|
Options of the DNS server logging. |
|
|
Bind options like the files and directories to use, the list of forwarders and other configuration settings. |
|
|
List of DNS zones known by the server, including all the settings, records and SOA records. |
<dns-server>
<logging config:type="list">
<listentry>
<key>channel</key>
<value>log_syslog { syslog; }</value>
</listentry>
</logging>
<options config:type="list">
<option>
<key>forwarders</key>
<value>{ 10.10.0.1; }</value>
</option>
</options>
<zones config:type="list">
<listentry>
<is_new>1</is_new>
<modified>1</modified>
<options config:type="list"/>
<records config:type="list">
<listentry>
<key>mydom.uwe.</key>
<type>MX</type>
<value>0 mail.mydom.uwe.</value>
</listentry>
<listentry>
<key>mydom.uwe.</key>
<type>NS</type>
<value>ns.mydom.uwe.</value>
</listentry>
</records>
<soa>
<expiry>1w</expiry>
<mail>root.aaa.aaa.cc.</mail>
<minimum>1d</minimum>
<refresh>3h</refresh>
<retry>1h</retry>
<serial>2005082300</serial>
<server>aaa.aaa.cc.</server>
<zone>@</zone>
</soa>
<soa_modified>1</soa_modified>
<ttl>2d</ttl>
<type>master</type>
<update_actions config:type="list">
<listentry>
<key>mydom.uwe.</key>
<operation>add</operation>
<type>NS</type>
<value>ns.mydom.uwe.</value>
</listentry>
</update_actions>
<zone>mydom.uwe</zone>
</listentry>
</zones>
</dns-server>
The dhcp-server resource makes it possible to configure
all the settings of a DHCP server by means of the following six properties.
| Element | Value | Description |
|---|---|---|
| chroot | 0 / 1 | A value of 1 means that the DHCP server must be jailed in a chroot. |
| start_service | 0 / 1 | Set this to 1 to enable the DHCP server (that is, run it on system startup). |
| use_ldap | 0 / 1 | If set to 1, the settings will be stored in LDAP instead of native configuration files. |
| other_options | Text | String with parameters that will be passed to the DHCP server executable when started. For example, use "-p 1234" to listen on the non-standard port 1234. For all possible options, consult the dhcpd manual page. If left blank, default values will be used. |
| allowed_interfaces | List | List of network cards on which the DHCP server will be operating. See the example below for the exact format. |
| settings | List | List of settings to configure the behavior of the DHCP server. The configuration is defined in a tree-like structure where the root represents the global options, with subnets and hosts nested from there. |
<dhcp-server>
<allowed_interfaces config:type="list">
<allowed_interface>eth0</allowed_interface>
</allowed_interfaces>
<chroot>0</chroot>
<other_options>-p 9000</other_options>
<start_service>1</start_service>
<use_ldap>0</use_ldap>
<settings config:type="list">
<settings_entry>
<children config:type="list"/>
<directives config:type="list">
<listentry>
<key>fixed-address</key>
<type>directive</type>
<value>192.168.0.10</value>
</listentry>
<listentry>
<key>hardware</key>
<type>directive</type>
<value>ethernet d4:00:00:bf:00:00</value>
</listentry>
</directives>
<id>static10</id>
<options config:type="list"/>
<parent_id>192.168.0.0 netmask 255.255.255.0</parent_id>
<parent_type>subnet</parent_type>
<type>host</type>
</settings_entry>
<settings_entry>
<children config:type="list">
<child>
<id>static10</id>
<type>host</type>
</child>
</children>
<directives config:type="list">
<listentry>
<key>range</key>
<type>directive</type>
<value>dynamic-bootp 192.168.0.100 192.168.0.150</value>
</listentry>
<listentry>
<key>default-lease-time</key>
<type>directive</type>
<value>14400</value>
</listentry>
<listentry>
<key>max-lease-time</key>
<type>directive</type>
<value>86400</value>
</listentry>
</directives>
<id>192.168.0.0 netmask 255.255.255.0</id>
<options config:type="list"/>
<parent_id/>
<parent_type/>
<type>subnet</type>
</settings_entry>
<settings_entry>
<children config:type="list">
<child>
<id>192.168.0.0 netmask 255.255.255.0</id>
<type>subnet</type>
</child>
</children>
<directives config:type="list">
<listentry>
<key>ddns-update-style</key>
<type>directive</type>
<value>none</value>
</listentry>
<listentry>
<key>default-lease-time</key>
<type>directive</type>
<value>14400</value>
</listentry>
</directives>
<id/>
<options config:type="list"/>
<parent_id/>
<parent_type/>
<type/>
</settings_entry>
</settings>
</dhcp-server>
SuSEfirewall2 has been replaced by firewalld since
SLES 15. Profiles using SuSEfirewall2 properties will be translated
to firewalld profiles. However, not all profile
properties can be converted. For details about firewalld refer to
Section 15.4, “firewalld”.
The use of SuSEfirewall2-based profiles is only partially supported,
as many options are not valid in firewalld, and missing
configuration could affect your network security.
In firewalld the general configuration exposes only a few
properties; most of the configuration is done by zones.
| Attribute | Value | Description |
|---|---|---|
| enable_firewall | Boolean | Whether the firewall should be enabled. |
| start_firewall | Boolean | Whether the firewall should be started. |
| default_zone | Zone name | The default zone is used for everything that is not explicitly assigned to another zone. |
| log_denied_packets | Type of dropped packets to be logged | Enable logging of dropped packets for the selected type. Values: off, unicast, multicast, broadcast, all |
The configuration of firewalld is based on the existence of several zones
which define the trust level for a connection, interface or source address.
The behavior of each zone can be tweaked in several ways although not all
the properties are exposed yet.
| Attribute | Value | Description |
|---|---|---|
| interfaces | List of interface names | List of interface names assigned to this zone. Interfaces or sources can only be part of one zone. |
| services | List of services | List of services accessible in this zone. |
| ports | List of ports | List of single ports or port ranges to be opened in the assigned zone. |
| protocols | List of protocols | List of protocols to be opened or made accessible in the assigned zone. |
| masquerade | Enable masquerade | Enables or disables network address translation (NAT) in the assigned zone. |
A full example of the firewall section, including general and zone-specific properties, could look like this:
<firewall>
<enable_firewall>true</enable_firewall>
<log_denied_packets>all</log_denied_packets>
<default_zone>external</default_zone>
<zones>
<zone>
<name>public</name>
<interfaces>
<interface>eth0</interface>
</interfaces>
<services config:type="list">
<service>ssh</service>
<service>dhcp</service>
<service>dhcpv6</service>
<service>samba</service>
<service>vnc-server</service>
</services>
<ports config:type="list">
<port>21/udp</port>
<port>22/udp</port>
<port>80/tcp</port>
<port>443/tcp</port>
<port>8080/tcp</port>
</ports>
</zone>
<zone>
<name>dmz</name>
<interfaces>
<interface>eth1</interface>
</interfaces>
</zone>
</zones>
</firewall>
In addition to the core component configuration, like network authentication and security, AutoYaST offers a wide range of hardware and system configuration options, the same as available by default on any system installed manually and interactively. For example, it is possible to configure printers, sound devices, TV cards and any other hardware components that have a module within YaST.
Any new configuration options added to YaST will be automatically available in AutoYaST.
AutoYaST support for printing is limited to basic settings defining how CUPS is used on a client for printing via the network.
There is no AutoYaST support for setting up local print queues. Modern
printers are usually connected via USB. CUPS accesses USB printers by a
model-specific device URI like
usb://ACME/FunPrinter?serial=1a2b3c. Usually it is
not possible to predict the correct USB device URI in advance, because
it is determined by the CUPS back-end usb during
runtime. Therefore it is not possible to set up local print queues with
AutoYaST.
Basics on how CUPS is used on a client workstation to print via network:
On client workstations application programs submit print jobs to the
CUPS daemon process (cupsd).
cupsd forwards the print jobs to a CUPS print
server in the network where the print jobs are processed. The server
sends the printer specific data to the printer device.
If there is only a single CUPS print server in the network, there is no
need to have a CUPS daemon running on each client workstation. Instead
it is simpler to specify the CUPS server in
/etc/cups/client.conf and access it directly (only
one CUPS server entry can be set). In this case application programs
that run on client workstations submit print jobs directly to the
specified CUPS print server.
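In the single-server case, /etc/cups/client.conf can be as short as a single directive. A minimal sketch (the server host name is a placeholder):

```
# /etc/cups/client.conf
# Send all print jobs directly to this CUPS server (example host name).
ServerName printserver.example.com
```

With this in place, no local cupsd needs to run on the workstation.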
Example 4.61, “Printer configuration” shows a printer
configuration section. The cupsd_conf_content entry
contains the whole verbatim content of the
cupsd configuration file
/etc/cups/cupsd.conf. The
client_conf_content entry contains the whole
verbatim content of /etc/cups/client.conf. The
printer section contains the
cupsd configuration but it does
not specify whether the cupsd should run.
<printer>
<client_conf_content>
<file_contents><![CDATA[
... verbatim content of /etc/cups/client.conf ...
]]></file_contents>
</client_conf_content>
<cupsd_conf_content>
<file_contents><![CDATA[
... verbatim content of /etc/cups/cupsd.conf ...
]]></file_contents>
</cupsd_conf_content>
</printer>
/etc/cups/cups-files.conf
With release 1.6 the CUPS configuration file has been split into two
files: cupsd.conf and
cups-files.conf. As of openSUSE Leap 42.3,
AutoYaST only supports modifying cupsd.conf since
the default settings in cups-files.conf are
sufficient for usual printing setups.
An example of the sound configuration created using the configuration system is shown below.
<sound>
<autoinstall config:type="boolean">true</autoinstall>
<modules_conf config:type="list">
<module_conf>
<alias>snd-card-0</alias>
<model>M5451, ALI</model>
<module>snd-ali5451</module>
<options>
<snd_enable>1</snd_enable>
<snd_index>0</snd_index>
<snd_pcm_channels>32</snd_pcm_channels>
</options>
</module_conf>
</modules_conf>
<volume_settings config:type="list">
<listentry>
<Master config:type="integer">75</Master>
</listentry>
</volume_settings>
</sound>
YaST allows importing SSH keys and server configuration from previous installations. The behavior of this feature can also be controlled through an AutoYaST profile.
<ssh_import> <import config:type="boolean">true</import> <copy_config config:type="boolean">true</copy_config> <device>/dev/sda2</device> </ssh_import>
| Attribute | Value | Description |
|---|---|---|
| import | true / false | If set to true, SSH keys will be imported. Setting it to false disables this feature. |
| copy_config | true / false | If set to true, the SSH server configuration will be imported in addition to the keys. This setting has no effect if import is set to false. |
| device | Partition | Partition to import the keys and configuration from. If it is not set, the partition containing the most recently accessed key is used. |
AutoYaST allows delegating part of the configuration to a configuration management tool like Salt:
AutoYaST takes care of system installation (partitioning, network setup, etc.)
System configuration can be delegated to a configuration management tool
This module configures the connection to a configuration management tool and uploads the SSH keys needed for establishing connections. At the end of the installation, the configuration management master will be contacted to retrieve state files and other resources.
<configuration_management>
<type>salt</type>
<master>linux-addc</master>
<auth_attempts config:type="integer">5</auth_attempts>
<auth_time_out config:type="integer">10</auth_time_out>
<keys_url>http://keys.example.de/keys</keys_url>
</configuration_management>
| Attribute | Value | Description |
|---|---|---|
| type | Configuration management type | Name of the configuration management system. Currently only salt is supported. |
| master | Host name | Host name or IP address of the configuration management master. |
| auth_attempts | Integer | At the end of the installation, YaST connects to the configuration management master, retrying up to this number of times. |
| auth_time_out | Integer | Time in seconds between the connection attempts to the configuration management master. The default is 15 seconds. |
| keys_url | URL of used key | Path to an HTTP server, hard disk, USB drive or similar location holding the key files. |
| enable_services | True/false | Enables the configuration management services on the client side. |
Rules and classes allow customizing installations for sets of machines in different ways:
Rules allow configuring a system depending on its attributes.
Classes represent configurations for groups of target systems. Classes can be assigned to systems.
autoyast Boot Option Only
Rules and classes are supported only by the boot option
autoyast=URL.
autoyast2=URL is not supported,
because this option downloads only a single AutoYaST control file.
Rules offer the possibility to configure a system depending on system attributes by merging multiple control files during installation. The rules-based installation is controlled by a rules file.
For example, this could be useful to install systems in two departments in one go. Assume a scenario where machines in department A need to be installed as office desktops, whereas machines in department B need to be installed as developer workstations. You would create a rules file with two different rules. For each rule, you could use different system parameters to distinguish the installations from one another. Each rule would also contain a link to an appropriate profile for each department.
The rules file is an XML file containing rules for each group of systems (or single systems) that you want to install automatically. A set of rules distinguishes a group of systems based on one or more system attributes. Each group of systems that passes all rules is linked to a control file. Both the rules file and the control files must be located in a pre-defined and accessible location.
The rules file is retrieved only if no specific control file is supplied
using the autoyast keyword. For example, if the
following is used, the rules file will not be evaluated:
autoyast=http://10.10.0.1/profile/myprofile.xml autoyast=http://10.10.0.1/profile/rules/rules.xml
Instead use:
autoyast=http://10.10.0.1/profile/
which will load
http://10.10.0.1/profile/rules/rules.xml (the slash
at the end of the directory name is important).
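The directory-versus-file distinction can be sketched in shell (an illustrative sketch, not AutoYaST's actual code):

```shell
# Sketch only: how a trailing slash decides whether rules are evaluated.
base="http://10.10.0.1/profile/"
case "$base" in
  */) url="${base}rules/rules.xml" ;;  # directory given: fetch the rules file
  *)  url="$base" ;;                   # single profile given: no rules
esac
echo "$url"   # http://10.10.0.1/profile/rules/rules.xml
```

Without the trailing slash, the value is treated as a single control file and the rules file is never requested.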
If more than one rule applies, the final control file for each group is generated on the fly using a merge script. The merging process is based on the order of the rules, and later rules override configuration data in earlier rules. Note that the names of the top sections in the merged XML files need to be in alphabetical order for the merge to succeed.
The use of a rules file is optional. If the rules file is not found, system installation proceeds in the standard way by using the supplied control file or by searching for the control file depending on the MAC or the IP address of the system.
The following simple example illustrates how the rules file is used to retrieve the configuration for a client with known hardware.
<?xml version="1.0"?>
<!DOCTYPE autoinstall>
<autoinstall xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<rules config:type="list">
<rule>
<disksize>
<match>/dev/sdc 1000</match>
<match_type>greater</match_type>
</disksize>
<result>
<profile>department_a.xml</profile>
<continue config:type="boolean">false</continue>
</result>
</rule>
<rule>
<disksize>
<match>/dev/sda 1000</match>
<match_type>greater</match_type>
</disksize>
<result>
<profile>department_b.xml</profile>
<continue config:type="boolean">false</continue>
</result>
</rule>
</rules>
</autoinstall>
The example above defines two rules and provides a different control
file for each rule. The rule used in this case is
disksize. After parsing the rules file, YaST
attempts to match the target system with the rules in the
rules.xml file. A rule match occurs when the
target system matches all system attributes defined in the rule. When the system matches a rule, the respective resource is added to
the stack of control files AutoYaST will use to create the final control
file. The continue property tells AutoYaST whether it
should continue evaluating other rules after a match has been found.
If the first rule does not match, the next rule in the list is examined until a match is found.
Using the disksize attribute, you can provide
different configurations for systems with hard disks of different
sizes. The first rule checks whether the device
/dev/sdc is available and whether it is larger than 1
GB, using the match property.
A rule must have at least one attribute to be matched. If you need to check more attributes, such as memory or architectures, you can add more attributes in the rule resource as shown in the next example.
The following example illustrates how the rules file is used to retrieve the configuration for a client with known hardware.
<?xml version="1.0"?>
<!DOCTYPE autoinstall>
<autoinstall xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<rules config:type="list">
<rule>
<disksize>
<match>/dev/sdc 1000</match>
<match_type>greater</match_type>
</disksize>
<memsize>
<match>1000</match>
<match_type>greater</match_type>
</memsize>
<result>
<profile>department_a.xml</profile>
<continue config:type="boolean">false</continue>
</result>
</rule>
<rule>
<disksize>
<match>/dev/sda 1000</match>
<match_type>greater</match_type>
</disksize>
<memsize>
<match>256</match>
<match_type>greater</match_type>
</memsize>
<result>
<profile>department_b.xml</profile>
<continue config:type="boolean">false</continue>
</result>
</rule>
</rules>
</autoinstall>
The rules directory must be located in the same directory specified via
the autoyast keyword at boot time. If the client was
booted using autoyast=http://10.10.0.1/profiles/,
AutoYaST will search for the rules file at
http://10.10.0.1/profiles/rules/rules.xml.
If the attributes AutoYaST provides for rules are not enough for your purposes, use custom rules. Custom rules contain a shell script. The output of the script (STDOUT; STDERR is ignored) can be evaluated.
Here is an example for the use of custom rules:
<rule>
<custom1>
<script>
if grep -i intel /proc/cpuinfo > /dev/null; then
echo -n "intel"
else
echo -n "non_intel"
fi;
</script>
<match>*</match>
<match_type>exact</match_type>
</custom1>
<result>
<profile>@custom1@.xml</profile>
<continue config:type="boolean">true</continue>
</result>
</rule>
The script in this rule can echo either intel or
non_intel to STDOUT (the output of the grep command
must be redirected to /dev/null in this case). The output of the rule
script is inserted between the two @ characters to determine the
file name of the control file to fetch. AutoYaST will read the output and
fetch a file named intel.xml or
non_intel.xml. This file can contain the AutoYaST
profile part for the software selection; for example, in case you want
a different software selection on Intel hardware than on other hardware.
The number of custom rules is limited to five: custom1
to custom5.
You can use five different match_types:
exact (default)
greater
lower
range
regex (a simple =~ operator
like in Bash)
If using exact, the string must match exactly as
specified. regex can be used to match substrings:
for example, ntel matches Intel, intel and intelligent.
greater and lower can be used for
memsize or totaldisk, for example. They can only match rules that
return an integer value. A range is likewise only possible for integer
values and has the form value1-value2, for example
512-1024.
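The semantics of these match types can be sketched with plain shell tests (illustrative only; this is not AutoYaST's implementation):

```shell
# exact: the strings must be identical
value="Intel"; match="ntel"
[ "$value" = "$match" ] || echo "no exact match"

# regex: substring/regular-expression match, like Bash's =~ operator
printf '%s' "$value" | grep -q "$match" && echo "regex match"

# greater/lower and range: integers only, e.g. memsize in MB
memsize=768
[ "$memsize" -gt 512 ] && echo "matches greater 512"
range="512-1024"
lo=${range%-*}; hi=${range#*-}
[ "$memsize" -ge "$lo" ] && [ "$memsize" -le "$hi" ] && echo "in range $range"
```

Here "ntel" fails the exact test but passes the regex test, and a memsize of 768 satisfies both greater 512 and the range 512-1024.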
Multiple attributes can be combined via a logical operator. It is
possible to let a rule match if disksize is greater than 1GB or memsize
is exactly 512MB.
You can do this with the operator element in the
rules.xml file. and and or are
possible operators, and being the default. Here is
an example:
<rule>
<disksize>
<match>/dev/sda 1000</match>
<match_type>greater</match_type>
</disksize>
<memsize>
<match>256</match>
<match_type>greater</match_type>
</memsize>
<result>
<profile>machine2.xml</profile>
<continue config:type="boolean">false</continue>
</result>
<operator>or</operator>
</rule>
The rules.xml file needs to:
have at least one rule,
have the name rules.xml,
be located in the directory rules in the profile
repository,
have at least one attribute to match in the rule.
The following table lists the predefined system attributes you can match in the rules file.
If you are unsure about a value on your system, run
/usr/lib/YaST2/bin/y2base ayast_probe ncurses.
The text box displaying the detected values can be scrolled. Note that
this command will not work while another YaST process that
requires a lock (for example, the installer) is running. Therefore you
cannot run it during the installation.
| Attribute | Values | Description |
|---|---|---|
| hostaddress | IP address of the host | This attribute must always match exactly. |
| hostname | The name of the host | This attribute must always match exactly. |
| domain | Domain name of the host | This attribute must always match exactly. |
| installed_product | The name of the product to be installed. | This attribute must always match exactly. |
| installed_product_version | The version of the product to be installed. | This attribute must always match exactly. |
| network | Network address of the host | This attribute must always match exactly. |
| mac | MAC address of the host | This attribute must always match exactly (the MAC addresses should have the form 0080c8f6484c). |
| linux | Number of installed Linux partitions on the system | This attribute can be 0 or more. |
| others | Number of installed non-Linux partitions on the system | This attribute can be 0 or more. |
| xserver | X Server needed for the graphics adapter | This attribute must always match exactly. |
| memsize | Memory available on the host in MBytes | All match types are available. |
| totaldisk | Total disk space available on the host in MBytes | All match types are available. |
| hostid | Hex representation of the IP address | Exact match required |
| arch | Architecture of the host | Exact match required |
| karch | Kernel architecture of the host (for example SMP kernel, Xen kernel) | Exact match required |
| disksize | Drive device and size | All match types are available. |
| product | The hardware product name as specified in SMBIOS | Exact match required |
| product_vendor | The hardware vendor as specified in SMBIOS | Exact match required |
| board | The system board name as specified in SMBIOS | Exact match required |
| board_vendor | The system board vendor as specified in SMBIOS | Exact match required |
| custom1 to custom5 | Custom rules using shell scripts | All match types are available. |
You can use dialog pop-ups with check boxes to select rules you want matched.
The elements listed below must be placed within the following XML
structure in the rules.xml file:
<rules config:type="list">
<rule>
<dialog>
...
</dialog>
</rule>
</rules>
| Attribute | Values | Description |
|---|---|---|
| dialog_nr | All rules with the same dialog_nr are presented in the same pop-up dialog. Example: <dialog_nr config:type="integer">3</dialog_nr> | This element is optional, and the default for a missing dialog_nr is always 0. |
| element | Specify a unique ID. Even if you have more than one dialog, you must not use the same id twice. Example: <element config:type="integer">3</element> | Optional. If left out, AutoYaST adds its own ids internally. Then you cannot specify conflicting rules (see below). |
| title | Caption of the pop-up dialog. Example: <title>Desktop Selection</title> | Optional |
| question | Question shown in the pop-up behind the check box. Example: <question>GNOME Desktop</question> | Optional. If you do not configure a text here, the name of the XML file that is triggered by this rule will be shown instead. |
| timeout | Timeout in seconds after which the dialog will automatically “press” the okay button. Useful for a non-blocking installation in combination with rules dialogs. Example: <timeout config:type="integer">30</timeout> | Optional. Without a timeout, the installation process stops until the dialog is confirmed by the user. |
| conflicts | A list of element ids (rules) that conflict with this rule. If this rule matches or is selected by the user, all conflicting rules are deselected and disabled in the pop-up. Take care that you do not create deadlocks. Example: <conflicts config:type="list"> <element config:type="integer">1</element> <element config:type="integer">5</element> ... </conflicts> | |
Here is an example of how to use dialogs with rules:
<rules config:type="list">
<rule>
<custom1>
<script>
echo -n 100
</script>
<match>100</match>
<match_type>exact</match_type>
</custom1>
<result>
<profile>rules/gnome.xml</profile>
<continue config:type="boolean">true</continue>
</result>
<dialog>
<element config:type="integer">0</element>
<question>GNOME Desktop</question>
<title>Desktop Selection</title>
<conflicts config:type="list">
<element config:type="integer">1</element>
</conflicts>
<dialog_nr config:type="integer">0</dialog_nr>
</dialog>
</rule>
<rule>
<custom1>
<script>
echo -n 100
</script>
<match>101</match>
<match_type>exact</match_type>
</custom1>
<result>
<profile>rules/gnome.xml</profile>
<continue config:type="boolean">true</continue>
</result>
<dialog>
<element config:type="integer">1</element>
<dialog_nr config:type="integer">0</dialog_nr>
<question>Gnome Desktop</question>
<conflicts config:type="list">
<element config:type="integer">0</element>
</conflicts>
</dialog>
</rule>
<rule>
<custom1>
<script>
echo -n 100
</script>
<match>100</match>
<match_type>exact</match_type>
</custom1>
<result>
<profile>rules/all_the_rest.xml</profile>
<continue config:type="boolean">false</continue>
</result>
</rule>
</rules>
Classes represent configurations for groups of target systems. Unlike rules, classes need to be configured in the control file. Then classes can be assigned to target systems.
Here is an example of a class definition:
<classes config:type="list">
<class>
<class_name>TrainingRoom</class_name>
<configuration>Software.xml</configuration>
</class>
</classes>
In the example above, the file Software.xml must be
placed in the subdirectory classes/TrainingRoom/. It
will be fetched from the same place the AutoYaST control file and rules
were fetched from.
If you have multiple control files and those control files share parts, it is better to use classes for the common parts. You can also use XIncludes.
Using the configuration management system, you can define a set of classes. A class definition consists of the following variables:
Name: the class name
Description: a description of the class
Order: the order (or priority) of the class in the merge stack
You can create as many classes as you need, however it is recommended to keep the set of classes as small as possible to keep the configuration system concise. For example, the following sets of classes can be used:
site: classes describing a physical location or site,
machine: classes describing a type of machine,
role: classes describing the function of the machine,
group: classes describing a department or a group within a site or a location.
A file saved in a class directory can have the same syntax and format as a regular control file but represents a subset of the configuration. For example, to create a new control file for a computer with a specific network interface, you only need the control file resource that controls the configuration of the network. Having multiple network types, you can merge the one needed for a special type of hardware with other class files and create a new control file which suits the system being installed.
It is possible to mix rules and classes during an auto-installation session. For example, you can identify a system using rules which contain class definitions. The process is described in Figure A.1, “Rules Retrieval Process”.
After retrieving the rules and merging them, the generated control file is parsed and checked for class definitions. If classes are defined, then the class files are retrieved from the original repository and a new merge process is initiated.
With classes and with rules, multiple XML files are merged into one resulting XML file. This merging process is often confusing, because it behaves differently than one would expect. First of all, it is important to note that the names of the top sections in the merged XML files must be in alphabetical order for the merge to succeed.
For example, the following two XML parts should be merged:
<partitioning config:type="list">
<drive>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">swap</filesystem>
<format config:type="boolean">true</format>
<mount>swap</mount>
<partition_id config:type="integer">130</partition_id>
<size>2000mb</size>
</partition>
<partition>
<filesystem config:type="symbol">xfs</filesystem>
<partition_type>primary</partition_type>
<size>4Gb</size>
<mount>/data</mount>
</partition>
</partitions>
</drive>
</partitioning>
<partitioning config:type="list">
<drive>
<initialize config:type="boolean">false</initialize>
<partitions config:type="list">
<partition>
<format config:type="boolean">true</format>
<filesystem config:type="symbol">xfs</filesystem>
<mount>/</mount>
<partition_id config:type="integer">131</partition_id>
<partition_type>primary</partition_type>
<size>max</size>
</partition>
</partitions>
<use>all</use>
</drive>
</partitioning>
You might expect the merged control file to contain three partitions.
This is not the case. You will end up with two partitions, and the first
partition is a mix of the swap and the root partition. Settings
configured in both partitions, like mount or size,
are taken from the second file. Settings that exist in only the first
or the second partition are copied to the merged partition as well.
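Assuming the second file is merged on top of the first, the resulting (usually unwanted) partitioning would look roughly like this sketch:

```xml
<partitioning config:type="list">
  <drive>
    <initialize config:type="boolean">false</initialize>
    <partitions config:type="list">
      <!-- swap and root collapsed into one partition;
           values present in both files come from the second file -->
      <partition>
        <filesystem config:type="symbol">xfs</filesystem>
        <format config:type="boolean">true</format>
        <mount>/</mount>
        <partition_id config:type="integer">131</partition_id>
        <partition_type>primary</partition_type>
        <size>max</size>
      </partition>
      <!-- the /data partition from the first file is kept -->
      <partition>
        <filesystem config:type="symbol">xfs</filesystem>
        <partition_type>primary</partition_type>
        <size>4Gb</size>
        <mount>/data</mount>
      </partition>
    </partitions>
    <use>all</use>
  </drive>
</partitioning>
```

The intended swap partition has disappeared entirely, because every one of its keys was overridden by the root partition from the second file.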
In this example, you do not want a second drive; the
two drives should be merged into one. With regard to partitions, three
separate ones should be defined. Using the dont_merge
method solves the merging problem:
<classes config:type="list">
<class>
<class_name>swap</class_name>
<configuration>largeswap.xml</configuration>
<dont_merge config:type="list">
<element>partition</element>
</dont_merge>
</class>
</classes>
<rule>
<board_vendor>
<match>ntel</match>
<match_type>regex</match_type>
</board_vendor>
<result>
<profile>classes/largeswap.xml</profile>
<continue config:type="boolean">true</continue>
<dont_merge config:type="list">
<element>partition</element>
</dont_merge>
</result>
<board_vendor>
<match>PowerEdge [12]850</match>
<match_type>regex</match_type>
</board_vendor>
<result>
<profile>classes/smallswap.xml</profile>
<continue config:type="boolean">true</continue>
<dont_merge config:type="list">
<element>partition</element>
</dont_merge>
</result>
</rule>
After the system has booted into an automatic installation and the control file has been retrieved, YaST configures the system according to the information provided in the control file. All configuration settings are summarized in a window that is shown by default and should be deactivated if a fully automatic installation is needed.
By the time YaST displays the summary of the configuration, YaST has only probed hardware and prepared the system for auto-installation. Nothing has been changed in the system yet. In case of any error, you can still abort the process.
A system should be automatically installable without the need for a graphics adapter or monitor. Having a monitor attached to the client machine is nevertheless recommended, so you can supervise the process and get feedback in case of errors. Choose between the graphical and the text-based ncurses interfaces. For headless clients, system messages can be monitored using the serial console.
This is the default interface while auto-installing. No special variables are required to activate it.
Start installing a system using the serial console by adding the
keyword console (for example
console=ttyS0) to the command line of the kernel.
This starts linuxrc in console mode and later YaST in serial
console mode.
This option can also be activated on the command line. To start
YaST in text mode, add textmode=1 on the
command line.
Starting YaST in text mode is recommended when installing a client with less than 64 MB of RAM or when X11 should not be configured, especially on headless machines.
There are different methods for booting the client. The computer can boot from its network interface card (NIC) to receive the boot images via DHCP or TFTP. Alternatively a suitable kernel and initrd image can be loaded from a flash disk or a bootable DVD-ROM.
YaST will check for autoinst.xml in the root
directory of the boot medium or the initrd upon start-up and switch to
an automated installation if it was found. In case the control file is
named differently or located elsewhere, specify its location on the
kernel command line with the parameter
AutoYaST=URL.
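For example, a kernel command line for an unattended network installation might look like this (host name and path are placeholders):

```
autoyast=http://autoinst.example.com/profiles/autoinst.xml netsetup=dhcp
```

Here the control file is fetched over HTTP, so the network must be set up first; the netsetup keyword is described in the linuxrc keyword table below.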
For testing or rescue purposes, or because the NIC does not have a PROM or PXE support, you can build a bootable flash disk to use with AutoYaST. Flash disks can also store the control file.
To create a bootable flash disk, copy either the
openSUSE Leap ISO image of DVD1 or the Mini CD ISO image to the
disk using the dd command (the flash disk must not be
mounted; all data on the device will be erased):
tux > sudo dd if=PATH_TO_ISO_IMAGE of=USB_STORAGE_DEVICE bs=4M
You can use the original openSUSE Leap DVD-ROM number one in combination with other media. For example, the control file can be provided via a flash disk or a specified location on the network. Alternatively, create a customized DVD-ROM that includes the control file.
Booting via PXE requires a DHCP and a TFTP server in your network. The computer will then boot without a physical medium.
If you install via PXE, the installation will run into an endless loop. This happens because after the first reboot, the machine performs the PXE boot again and restarts the installation instead of booting from the hard disk for the second stage of the installation.
There are several ways to solve this problem. You can use an HTTP server to provide the AutoYaST control file. Alternatively, instead of a static control file, run a CGI script on the Web server that provides the control file and changes the TFTP server configuration for your target host. This way, the next PXE boot of the machine will be from the hard disk by default.
Another way is to use AutoYaST to upload a new PXE boot configuration for the target host via the control file:
<pxe>
<pxe_localboot config:type="boolean">true</pxe_localboot>
<pxelinux-config>
DEFAULT linux
LABEL linux
localboot 0
</pxelinux-config>
<tftp-server>192.168.1.115</tftp-server>
<pxelinux-dir>/pxelinux.cfg</pxelinux-dir>
<filename>__MAC__</filename>
</pxe>
This entry will upload a new configuration for the target host to the
TFTP server shortly before the first reboot happens. In most
installations the TFTP daemon runs as user
nobody. You need to make sure
this user has write permissions to the
pxelinux.cfg directory. You can also configure the
file name that will be uploaded. If you use the “magic”
__MAC__ file name, the file name will be the MAC
address of your machine, for example
01-08-00-27-79-49-ee. If the file name setting is
missing, the IP address will be used instead.
To do another auto-installation on the same machine, you need to remove the file from the TFTP server.
Adding the command line variable autoyast causes
linuxrc to start in automated mode. linuxrc searches for a
configuration file (which should be distinguished from the main control
file) in the following places:
in the root directory of the initial RAM disk used for booting the system,
in the root directory of the boot medium
The configuration file used by linuxrc can have the following keywords (for a detailed description of how linuxrc works and other keywords, see Appendix C, Advanced Linuxrc Options):
| Keyword | Value |
|---|---|
| autoupgrade | Initiate an automatic upgrade using AutoYaST. Also requires the autoyast parameter. |
| autoyast | Location of the control file for automatic installation; see Table 6.2, “Command Line Variables for AutoYaST” for details. |
| autoyast2 | Location of the control file for automatic installation. Similar to autoyast, but linuxrc also tries to configure the network. |
| netsetup | Configure and start the network. Required if the AutoYaST profile is to be fetched from a remote location. See Section C.3, “Advanced Network Setup” for details. |
| insmod | Kernel modules to load. |
| install | Location of the installation directory. |
| instmode | Installation mode, for example cd or nfs. |
| server | Server (NFS) to contact for the source directory. |
| serverdir | Directory on the NFS server. |
| y2confirm | Even with <confirm>no</confirm> in the control file, the confirm proposal comes up. |
These variables and keywords will bring the system up to the point where YaST can take over with the main control file. Currently, the source medium is automatically discovered, which in some cases makes it possible to initiate the auto-install process without giving any instructions to linuxrc.
The traditional linuxrc configuration file (info)
provides the client with enough information about the
installation server and the location of the sources. Usually, this file
is not required, but it is needed in special network environments
where DHCP and BOOTP are not used, or when special kernel modules need
to be loaded.
All linuxrc keywords can be passed to linuxrc using the kernel command line. The command line can also be set when creating network bootable images or it can be passed to the kernel using a specially configured DHCP server in combination with Etherboot or PXE.
The command line variable autoyast can be used in
the format described in Table 6.2, “Command Line Variables for AutoYaST”.
| Command line variable | Description |
|---|---|
| autoyast=default | Default auto-installation option. |
| autoyast=file://PATH | Looks for the control file in the specified path, relative to the source root directory (for example file:///autoinst.xml if the file is located in the top-level directory of the medium). |
| autoyast=device://DEVICE/FILENAME | Looks for the control file on a storage device. Do not specify the full path to the device, but the device name only. You may also omit the device to trigger AutoYaST to search all devices (device://FILENAME). |
| autoyast=nfs://SERVER/PATH | Looks for the control file on an NFS server. |
| autoyast=http://[USER:PASSWORD@]SERVER/PATH | Retrieves the control file from a Web server using the HTTP protocol. Specifying a user name and a password is optional. |
| autoyast=https://[USER:PASSWORD@]SERVER/PATH | Retrieves the control file from a Web server using HTTPS. Specifying a user name and a password is optional. |
| autoyast=tftp://SERVER/PATH | Retrieves the control file via TFTP. |
| autoyast=ftp://[USER:PASSWORD@]SERVER/PATH | Retrieves the control file via FTP. Specifying a user name and a password is optional. |
| autoyast=usb://PATH | Retrieves the control file from USB devices (AutoYaST will search all connected USB devices). |
| autoyast=relurl://PATH | Retrieves the control file from the installation source (install=...). |
| autoyast=slp | Queries the location of the control file from an SLP server (service:autoyast:...). |
| autoyast=cifs://SERVER/PATH | Looks for the control file on a CIFS server. |
| autoyast=label://LABEL/FILENAME | Searches for a control file on a device with the specified label. |
Several scenarios for auto-installation are possible, using different types of infrastructure and source media. The simplest way is to use the source media (DVD number one) of openSUSE Leap. To initiate the auto-installation process, the autoyast command-line variable needs to be entered at system boot-up, and the control file needs to be accessible to YaST.
In a scripting context, you can use a serial console for your virtual machine, which allows you to work in text mode. You can then pass the needed parameters from an expect script or equivalent.
The following list of scenarios explains how the control file can be supplied:
When using the original DVD-ROM (DVD #1 is needed), the control file needs to be accessible via flash disk or network:
Flash Disk.
Access the control file via the
autoyast=usb://PATH
option.
Network.
Access the control file via the following commands:
autoyast=nfs://..,
autoyast=ftp://..,
autoyast=http://..,
autoyast=https://..,
autoyast=tftp://.., or
autoyast=cifs://...
In this case, you can include the control file directly on the
DVD-ROM. When placing it in the root directory and naming it
autoinst.xml, it will automatically be found
and used for the installation. Otherwise use
autoyast=file:///PATH
to specify the path to the control file.
When using a DVD-ROM for auto-installation, it is necessary to
instruct the installer to use the DVD-ROM for installation instead of
trying to find the installation files on the network. This can be
done by adding the instmode=cd option to
the kernel command line (this can be automated by adding the
option to the isolinux.cfg file on the DVD).
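For illustration, an isolinux.cfg entry along these lines would boot straight into the auto-installation; the label name and the exact append options are assumptions and need to be adapted to your medium:

```
default autoinst
label autoinst
  kernel linux
  append initrd=initrd splash=silent instmode=cd autoyast=file:///autoinst.xml
```

With file:///autoinst.xml, the control file is expected in the root directory of the DVD as described above.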
This scenario is the most important one, because
installations of multiple machines are usually done using SLP or NFS
servers and other network services like BOOTP and DHCP. The easiest
way to make the control file available is to place it in the root
directory of the installation source, naming it
autoinst.xml. In this case it will
automatically be found and used for the installation. The control
file can also reside in the following places:
Flash Disk.
Access the control file via the
autoyast=usb://PATH
option.
Network.
Access the control file via the following commands:
autoyast=nfs://..,
autoyast=ftp://..,
autoyast=http://..,
autoyast=https://..,
autoyast=tftp://.., or
autoyast=cifs://...
To disable the network during installations where it is not needed or
unavailable, for example when auto-installing from DVD-ROMs, use the
linuxrc option netsetup=0 to disable the network
setup.
autoyast and
autoyast2 Options
The options autoyast and autoyast2
are very similar but differ in one important point:
When you use autoyast=http://..., you need to
provide linuxrc with the network configuration.
When you use autoyast2=http://..., linuxrc tries
to configure the network for you.
If autoyast=default is defined, YaST will look
for a file named autoinst.xml in the following
three places:
the root directory of the flash disk,
the root directory of the installation medium,
the root directory of the initial RAM disk used to boot the system.
With all AutoYaST invocation options, excluding
default, it is possible to specify the location of
the control file in the following ways:
Specify the exact location of the control file:
autoyast=http://192.168.1.1/control-files/client01.xml
Specify a directory where several control files are located:
autoyast=http://192.168.1.1/control-files/
In this case the relevant control file is retrieved using the hexadecimal representation of the client's IP address, as described below.
If only the path prefix variable is defined, YaST will fetch the control file from the specified location in the following way:
First, it will search for the control file using its own IP address
in uppercase hexadecimal, for example 192.0.2.91 ->
C000025B.
If this file is not found, YaST will remove one hex digit and
try again. This action is repeated until a file with a matching
name is found. Ultimately, it will try looking for a file named
after the MAC address of the client (written without delimiters,
for example 0080C8F6484C) and, if that is not
found, a file named default (in lowercase).
As an example, for 192.0.2.91, the HTTP client will try:
C000025B C000025 C00002 C0000 C000 C00 C0 C 0080C8F6484C default
in that order.
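The naming scheme can be reproduced with plain shell commands. The following sketch is only an illustration of how the candidate names are derived; it is not part of AutoYaST:

```shell
# Convert a dotted IPv4 address to the uppercase hex form used by AutoYaST.
ip=192.0.2.91
hex=$(printf '%02X%02X%02X%02X' $(echo "$ip" | tr '.' ' '))
echo "$hex"    # C000025B

# Print the candidate file names, longest first, in the order YaST tries them.
while [ -n "$hex" ]; do
  echo "$hex"
  hex=${hex%?}   # drop the last hex digit
done
```

After the last hex candidate, YaST additionally tries the MAC address and default, as listed above.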
To determine the hex representation of the IP address of the client,
use the utility /usr/bin/gethostip, available
with the syslinux package:
tux > /usr/bin/gethostip 10.10.0.1
10.10.0.1 10.10.0.1 0A0A0001
The easiest way to auto-install a system without any network connection is to use the original openSUSE Leap DVD-ROMs and a flash disk. You do not need to set up an installation server or the network environment.
Create the control file and name it autoinst.xml.
Copy the file autoinst.xml to the flash disk.
info file with the AutoYaST control file
If you choose to pass information to linuxrc using the
info file, it is possible to integrate the
keywords in the XML control file. In this case the file needs to be
accessible to linuxrc and needs to be named info.
Linuxrc will look for the string (start_linuxrc_conf)
in the control file, which marks the beginning of the file. If it
is found, it will parse the content starting from that string and
stop when the string end_linuxrc_conf is found.
The options are stored in the control file in the following way:
....
<install>
....
<init>
<info_file>
<![CDATA[
#
# Do not remove the following line:
# start_linuxrc_conf
#
install: nfs://192.168.1.1/CDs/full-i386
textmode: 1
autoyast: file:///info
# end_linuxrc_conf
# Do not remove the above comment
#
]]>
</info_file>
</init>
......
</install>
....
Note that the autoyast keyword must point to the
same file. If it is on a flash disk, then the option
usb:/// needs to be used. If the
info file is stored in the initial RAM disk, the
file:// option needs to be used.
The system configuration during auto-installation is the most important part of the whole process. As you have seen in the previous chapters, almost anything can be configured automatically on the target system. In addition to the pre-defined directives, you can always use post-scripts to change other things in the system. Additionally you can change any system variables, and if required, copy complete configuration files into the target system.
The post-installation and system configuration are initiated directly after the last package is installed on the target system and continue after the system has booted for the first time.
Before the system is booted for the first time, AutoYaST writes all data collected during installation and installs the boot loader in the specified location. In addition to these regular tasks, AutoYaST executes the chroot-scripts as specified in the control file. Note that these scripts are executed while the system has not yet been booted.
If a different kernel than the default is installed, a hard reboot will
be required. A hard reboot can also be forced during auto-installation,
independent of the installed kernel. Use the reboot
property of the general resource (see
Section 4.1, “General Options”).
Most of the system customization is done in the second stage of the installation. If you require customization that cannot be done using AutoYaST resources, use post-install scripts for further modifications.
You can define an unlimited number of custom scripts in the control file, either by editing the control file or by using the configuration system.
In some cases it is useful to run AutoYaST in a running system.
In the following example, an additional software package
(foo) is going to be installed. To run this
software, a user needs to be added and an NTP client needs to be configured.
The respective AutoYaST profile needs to include a section for the package installation (Section 4.8.8, “Installing Packages in Stage 2”), a user (Section 4.28.1, “Users”) section and an NTP-client (Section 4.19, “NTP Client”) section:
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<ntp-client>
<peers config:type="list">
<peer>
<address>us.pool.ntp.org</address>
<comment/>
<options> iburst</options>
<type>server</type>
</peer>
</peers>
<start_at_boot config:type="boolean">true</start_at_boot>
<start_in_chroot config:type="boolean">false</start_in_chroot>
<sync_interval config:type="integer">5</sync_interval>
<synchronize_time config:type="boolean">false</synchronize_time>
</ntp-client>
<software>
<post-packages config:type="list">
<package>ntp</package>
<package>yast2-ntp-client</package>
<package>foo</package>
</post-packages>
</software>
<users config:type="list">
<user>
<encrypted config:type="boolean">false</encrypted>
<fullname>Foo user</fullname>
<gid>100</gid>
<home>/home/foo</home>
<password_settings>
<expire/>
<flag/>
<inact/>
<max>99999</max>
<min>0</min>
<warn>7</warn>
</password_settings>
<shell>/bin/bash</shell>
<uid>1001</uid>
<user_password>linux</user_password>
<username>foo</username>
</user>
</users>
</profile>
Store this file as /tmp/install_foo.xml and start the
AutoYaST installation process by calling:
tux > sudo yast2 ayast_setup setup filename=/tmp/install_foo.xml dopackages="yes"
For more information, run yast2 ayast_setup longhelp.
The following figure illustrates how rules are handled and the processes of retrieval and merge.
On all openSUSE Leap versions, the automatic installation is invoked by
adding autoyast=<PATH_TO_PROFILE> to the kernel
parameter list. For example, adding
autoyast=http://MYSERVER/MYCONFIG.xml
will start an automatic installation where the profile containing the
AutoYaST configuration is fetched from the Web server
MYSERVER. See Section 6.3, “Invoking the Auto-Installation Process” for
more information.
A profile is the AutoYaST configuration file. The content of the AutoYaST profile determines how the system will be configured and which packages will get installed. This includes partitioning, network setup, and software sources, to name but a few. Almost everything that can be configured with YaST in a running system can also be configured in an AutoYaST profile. The profile is a plain-text XML file.
The easiest way to create an AutoYaST profile is to use an existing
openSUSE Leap system as a template. On an already installed system,
start the YaST autoinstallation module and choose the system components
you want to include in the profile. Alternatively, create a profile
containing the complete system configuration by running sudo yast
clone_system from the command line.
Both methods will create the file
/root/autoinst.xml. The version created on the
command line can be used to set up an identical clone of the system on
which the profile was created. However, usually you will want to adjust
the file to make it possible to install several machines that are very
similar, but not identical. This can be done by adjusting the profile
using your favorite text/XML editor.
The most efficient way to check your created AutoYaST profile is by
using jing or xmllint.
See Section 3.3, “Creating/Editing a Control File Manually” for details.
If a section has not been defined in the AutoYaST profile,
the settings of the general YaST installation proposal will be used.
However, you need to specify at least the root password to be able
to log in to the machine after the installation.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<users config:type="list">
<user>
<encrypted config:type="boolean">false</encrypted>
<user_password>linux</user_password>
<username>root</username>
</user>
</users>
</profile>
Use the following sound section in your profile:
<sound>
  <autoinstall config:type="boolean">true</autoinstall>
  <configure_detected config:type="boolean">true</configure_detected>
</sound>
Put the profile in the root of the DVD. Refer to it with
file:///PROFILE.xml.
To merge the profile a.xml with
base.xml, run the following command:
tux > /usr/bin/xsltproc --novalid --param replace "'false'" \
--param dontmerge1 "'package'" --param with "'a.xml'" --output out.xml \
/usr/share/autoinstall/xslt/merge.xslt base.xml
This requires sections in both profiles to be in alphabetical order (<software>, for example, needs to be listed after <add-on>). Profiles created with YaST are automatically sorted correctly.
The dontmerge1 parameter is optional and an
example of what to do when you use the dont_merge
element in your profile. See Section 5.4, “Merging of Rules and Classes” for more
information.
Zypper can only be called from AutoYaST init scripts, because during the post-script phase YaST still holds an exclusive lock on the RPM database.
If you really need to use another script type (for example a post-script), you will have to break the lock, at your own risk:
<post-scripts config:type="list">
<script>
<filename>yast_clone.sh</filename>
<interpreter>shell</interpreter>
<location/>
<feedback config:type="boolean">false</feedback>
<source><![CDATA[#!/bin/sh
mv /var/run/zypp.pid /var/run/zypp.sav
zypper in foo
mv /var/run/zypp.sav /var/run/zypp.pid
]]></source>
</script>
</post-scripts>
Actually, the order is not important. The order of sections in the profile has no influence on the AutoYaST workflow. However, if you want to merge different profiles, sections need to be in alphabetical order.
File not signed. I need to interact manually.
Linuxrc found an unsigned file (such as a driver update). To
use an unsigned file, you can suppress that message by passing
insecure=1 to the linuxrc parameter list (together
with the autoyast=... parameter).
You need to pass ifcfg to linuxrc. This is required
to set up the network, otherwise AutoYaST cannot download the
profile from remote. See Section C.3, “Advanced Network Setup” for more
information.
Is auto-installation onto an NFS root (/) possible?
Yes, but it is a little bit “tricky”. You will need to set up the environment (DHCP, TFTP, etc.) very carefully. The AutoYaST profile needs to look like the following:
<?xml version="1.0"?>
<!DOCTYPE profile>
<profile xmlns="http://www.suse.com/1.0/yast2ns" xmlns:config="http://www.suse.com/1.0/configns">
<partitioning config:type="list">
<drive>
<device>/dev/nfs</device>
<initialize config:type="boolean">false</initialize>
<type config:type="symbol">CT_NFS</type>
<partitions config:type="list">
<partition>
<filesystem config:type="symbol">nfs</filesystem>
<fstopt>nolock</fstopt>
<device>10.10.1.53:/tmp/m4</device>
<mount>/</mount>
</partition>
</partitions>
<use>all</use>
</drive>
</partitioning>
</profile>
There is an AutoYaST mailing list where you can post your questions. Join us at http://lists.opensuse.org/opensuse-autoinstall/.
Linuxrc is a program used for setting up the kernel for installation purposes. It allows the user to load modules and to start an installed system, a rescue system, or an installation via YaST.
Linuxrc is designed to be as small as possible. Therefore, all needed programs are linked directly into one binary, so there is no need for shared libraries in the initial RAM disk.
If you run Linuxrc on an installed system, it works slightly differently so as not to destroy your installation. As a consequence, you cannot test all features this way.
Unless Linuxrc is in manual mode, it will look for an
info file in these locations: first
/info on the flash disk and if that does not exist,
for /info in the initrd. After that it parses the
kernel command line for parameters. You may change the
info file Linuxrc reads by setting the
info command line parameter. If you do not want
Linuxrc to read the kernel command line (for example because you need to
specify a kernel parameter that Linuxrc recognizes as well), use
linuxrc=nocmdline.
Linuxrc will always look for and parse a file
/linuxrc.config. Use this file to change default
values if you need to. In general, it is better to use the
info file instead. Note that
/linuxrc.config is read before any
info file, even in manual mode.
info file format
Lines starting with # are comments; valid entries are
of the form:
key: value
Note that value extends to the end of the line and
therefore may contain spaces. key is matched
case-insensitively.
You can use the same key-value pairs on the kernel command line using
the syntax key=value. Lines that do not have the form
described above are ignored.
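For illustration, a short info file using the keywords from the example earlier in this chapter could look like this (the server address and paths are placeholder assumptions):

```
# sample info file; adjust URLs to your environment
install: nfs://192.168.1.1/CDs/full-i386
textmode: 1
autoyast: http://192.168.1.1/profiles/autoinst.xml
```

Each line is one key: value pair; the same pairs could equally be passed on the kernel command line as key=value.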
The table below lists important keys and example values. For a complete list of linuxrc parameters refer to https://en.opensuse.org/SDB:Linuxrc.
| Keyword: Example Value | Description |
|---|---|
| addswap: 0 | If 0, never ask for swap; if the argument is a positive number n, activate the n-th swap partition. |
| autoyast: URL | Location of the auto-installation file; activates auto-installation mode. See Table 6.2, “Command Line Variables for AutoYaST” for details. |
| bootptimeout: 10 | 10 seconds timeout for BOOTP requests. |
| bootpwait: 5 | Sleep 5 seconds between network activation and starting bootp. |
| display: color | Set the menu color scheme. |
| exec: COMMAND | Run COMMAND. |
| forceinsmod: 1 | Use the force option when loading kernel modules. |
| forcerootimage: 1 | Load the installation system into RAM disk. |
| ifcfg: eth0=dhcp | Set up and start the network. See Section C.3, “Advanced Network Setup” for more information. |
| insmod: MODULE | Load MODULE. |
| install: URL | Install from the repository specified with URL. For the syntax of URL refer to https://en.opensuse.org/SDB:Linuxrc#url_descr. |
| keytable: de-lat1 | Virtual console keyboard map to load. |
| language: de_DE | Language preselected for the installation. |
| loghost: 10.10.0.22 | Enable remote logging via syslog. |
| memloadimage: 50000 | Load installation system into RAM disk if free memory is above 50000 KB. |
| memlimit: 10000 | Ask for swap if free memory drops below 10000 KB. |
| memyast: 20000 | Run YaST in text mode if free memory is below 20000 KB. |
| memyastswap: 10000 | Ask for swap before starting YaST if free memory is below 10000 KB. |
| proxy: http://10.10.0.1:3128 | Proxy (either FTP or HTTP). |
| rescue: 1 | Load the rescue system; a URL value specifies the location of the rescue image explicitly. |
| rescueimage: /suse/images/rescue | Location of the rescue system image. |
| rootimage: /suse/images/root | Location of the installation system image. |
| textmode: 1 | Start YaST in text mode. |
| usbwait: 4 | Wait 4 seconds after loading the USB modules. |
| y2confirm | Overrides the confirm parameter in a control file and requests confirmation of the installation proposal. |
Even if parameters like hostip,
nameserver, and gateway are passed to
linuxrc, the network is only started when it is needed (for example, when
installing via SSH or VNC). Since autoyast is not a linuxrc
parameter (this parameter is ignored by linuxrc and only passed to YaST),
the network will not be started automatically when
specifying a remote location for the AutoYaST profile.
Therefore the network needs to be started explicitly. This used to be done
with the linuxrc parameter netsetup. Starting with
openSUSE Leap 13.2, the parameter ifcfg is
available. It offers more configuration options, for example configuring
more than one interface. ifcfg directly controls the
content of the /etc/sysconfig/network/ifcfg-* files.
The general syntax to configure DHCP is
ifcfg=INTERFACE=DHCP*,OPTION1=VALUE1,OPTION2=VALUE2
where INTERFACE is the interface name, for
example eth0, or eth* for all
interfaces. DHCP* can either be
dhcp (IPv4 and IPv6), dhcp4, or
dhcp6.
To set up DHCP for eth0 use:
ifcfg=eth0=dhcp
To set up DHCP on all interfaces use:
ifcfg=eth*=dhcp
The general syntax to configure a static network is
ifcfg=INTERFACE=IP_LIST,GATEWAY_LIST,NAMESERVER_LIST,DOMAINSEARCH_LIST,OPTION1=value1,...
where INTERFACE is the interface name, for
example eth0. If using eth*, the
first device available will be used. The other parameters need to be
replaced with the respective values in the given order. Example:
ifcfg=eth0=192.168.2.100/24,192.168.5.1,192.168.1.116,example.com
When specifying multiple addresses for a parameter, use spaces to separate them and quote the complete string. The following example uses two name servers and a search list containing two domains.
ifcfg="eth0=192.168.2.100/24,192.168.5.1,192.168.1.116 192.168.1.117,example.com example.net"
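To give an idea of the effect, the static example above would produce an /etc/sysconfig/network/ifcfg-eth0 roughly like the following sketch (the exact set of keys written depends on the linuxrc version; treat this as an assumption):

```
# illustrative sketch only; the generated file may contain additional keys
BOOTPROTO='static'
STARTMODE='auto'
IPADDR='192.168.2.100/24'
```

The gateway and the name servers are not stored in the ifcfg file itself; on SUSE systems they typically end up in /etc/sysconfig/network/routes and /etc/sysconfig/network/config respectively.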
For more information refer to https://en.opensuse.org/SDB:Linuxrc#Network_Configuration.
This appendix contains the GNU Free Documentation License version 1.2.
Copyright (C) 2000, 2001, 2002 Free Software Foundation, Inc. 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or non-commercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.
This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.
This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.
A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.
The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.
A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.
You may copy and distribute the Document in any medium, either commercially or non-commercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly display copies.
If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.
You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:
Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
State on the Title page the name of the publisher of the Modified Version, as the publisher.
Preserve all the copyright notices of the Document.
Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
Include an unaltered copy of this License.
Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version.
Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section.
Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.
You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". You must delete all sections Entitled "Endorsements".
You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.
You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.
A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.
Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.
You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
Copyright (c) YEAR YOUR NAME. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.
If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.